Research Technology (ResTech)

December 20, 2011

Is Eye-Tracking Making us Blind and Other Research Maladies

Do marketers and marketing researchers understand and appreciate the boundaries and appropriate uses of new research techniques?


By Steve Needel, PhD

I recently participated in an online discussion, one that is all too common these days, about the fate of marketing research: how did we get here? Why won’t they listen to me? Nobody appreciates me – the usual. The tenor of the discussion made me think that if we were all Buddhists, our rate of self-immolation would be alarming. As I have previously written (2006, 2008), I believe that our fate and our corporate success as researchers are in our own hands. I raise the question of whether eye-tracking is making us blind, along with the related questions of whether neuroscience has made us stupid and whether micro-expressions have made us heartless. These research tools are getting a lot of buzz these days, much of it generated by their providers. I’m not sure these techniques are what we think they are, and I am sure they are often not what they purport to be. We are seeing among marketers and marketing researchers an increasing lack of understanding and appreciation of the boundaries and appropriate uses of these new techniques – hence the purpose of this paper.

Let me start with a recent example from my own company. A client decided to use eye-tracking for a packaging test rather than go with us and test the new package in virtual reality. We thought this was strange because the client usually commissions us to measure the sales impact of their package changes, and in this case their brief was very specific about that action standard. When I asked why they had changed their decision criterion from sales to attention-getting, the response was that they thought the two were the same thing. The research supplier they chose was happy to let them believe that.

Reading the published literature on eye-tracking (mostly by Drs. Wedel, Pieters, and Chandon), there is no evidence of a relationship between attention and sales beyond the elementary “if you don’t see it, you can’t buy it”. This is literature put out both by academicians and by the companies that sell eye-tracking research (for example, Perception Research Services). This doesn’t mean eye-tracking is a bad tool – it only means that it doesn’t tell you what will happen to your sales with one package or another. Eye-tracking was originally designed for print media, has since been extended to webpage viewing analysis, and is used for package testing, where the idea is to understand what someone pays attention to on a package (and perhaps in what order they attend to the different elements). As such, it is a great tool for designers to see whether what they are building attracts the kind of attention they want to attract. Mobile eye-tracking lets us get out of the lab and into the real world. Whether shoppers are paying attention to displays or point-of-purchase material is interesting, although I would contend that the question is secondary to whether displays or P-O-P material sell more stuff. When we move into eye-tracking’s newer applications, such as shelf set research, the technique becomes irrelevant or, at best, redundant with other information we have. In virtual reality, for example, we have yet to see where eye-tracking adds any information beyond whether shoppers pick up and look at a product – it is non-differentiating. Eye-tracking should be used for what it was intended to do; extending its application may be misleading.

Neuromarketing is another area that gets much more attention and much more credit than it deserves, a point well made by Dr. Deanna Weissberg of Rutgers University. There is no published, replicable evidence that increased neural activity in humans relates to much of anything when it comes to decisions or preferences. David Penn’s recent article in Research World makes it clear that the technique may be interesting, but that neuroscientists have yet to figure out what it means. The Advertising Research Foundation’s 2009 review of Innerscope finds support for using neuroscience to understand broad emotional response, but it also suggests that the single-method approach of fMRI or EEG headbands is not going to be sufficient, in the same way that GSR, pulse, and respiration rate have not led us to make great strides in understanding physiological responses to stimuli. Here’s where we can easily be led astray: we shouldn’t expect people to have emotional responses to most of the products we sell in the CPG world. It’s a breakfast cereal, after all, not a life-long commitment! You need to generate a pretty strong emotional response to show up above baseline neurological activity – commercials, by their very definition, might do this, but soft drinks? We don’t think so, and no published studies have validated the claims made by some of the promoters of neuroscience for marketing research. Indeed, recent claims by Martin Lindstrom regarding the neuropsychologically compelling nature of the iPhone have drawn criticism from leading neuroscientists in the New York Times (October 4, 2011). And don’t be swayed by the “emotion drives 95% of purchases” claims floating around – habit drives 95% of the purchases in the CPG world (or thereabouts).

A recent LinkedIn article (posted by the research agency’s PR firm) touts the use of micro-expressions to measure consumers’ emotional responses to products. The rationale, according to the CEO of the supplier being quoted, is that emotions, rather than logic, drive purchasing. And the discussion would have us believe that we as an industry are not sufficiently adept at eliciting emotions without expensive and time-consuming facial analysis. The writer states that “the science is complex and the ability to analyze human responses is held by only a few trained individuals.” This, of course, would justify high prices for this research agency and gives it that mystique, that patina of “well, it must be good if it’s expensive and time-consuming and nobody else can do it” that many of us fall for all too often. There are so many talking points to bring up, I hardly know where to start. This is not a unique and highly specialized research tool – anyone can go to Paul Ekman’s website and get fully trained for about US$70, and he claims to be able to teach you facial-expression recognition in about 40 minutes. Understanding micro-expressions does not have a well-defined protocol, in the sense that facial analysis does not parse emotions into the fine grain one would expect; you are looking at seven basic emotions. We might ask whether, if a researcher can’t measure the difference between a positive and a negative reaction to a product or a concept quickly and inexpensively, they should even be in the marketing research business. Do we think that our research participants are lying to us at such a high rate that we need to be constantly on the lookout for deception? I would contend that if we don’t trust our participants, we shouldn’t be asking them questions. I do recognize that participants aren’t always good at telling us what they are thinking or feeling, but that is a problem with what and how we are asking rather than evidence that they are lying to us. Micro-expressions made for a mediocre television show and have all the promise of being a mediocre marketing research technology.

In a recent NewMR radio show, I was asked why these new technologies appear to be so popular if there is so little science behind them. By science, I mean a demonstrable, reliable phenomenon that is linked to something measurable related to marketing actions. Linking eye-tracking to sales, linking increased blood flow in the brain to a valenced emotion, linking physiological arousal to affect, or showing that micro-expressions significantly improve our understanding of a person’s response to a product or offer would all be good science; little to none of this exists. Popularity can be explained, in part, by our inferiority complex: researchers remain so concerned with getting that seat at the table that the sizzle and sexiness of these new tools outweigh their lack of validation. Rather than earning that seat by helping our companies do what they do better, we try to look deep, cutting-edge, insightful, all to justify our presence. The problem is, we often fail at that – it’s not easy being deep and insightful. When we don’t get the payout from these hyped technologies, the fall is that much harder. Fortunately for us, marketers may be even more susceptible to the allure of high tech than we are, meaning they’ll buy into the next idea that we bring up almost as quickly.

I’d like to see us use these tools in the appropriate situations for the appropriate types of research questions they were designed for. I’d like research buyers to demand some validation of the output (indeed, having been in this position, I’d like to see research buyers fund some of this validation). Finally, I’d like to see us worry less about how we get a seat at the table and more about how we do research that helps our companies or our clients sell more – that’s the purpose of most marketing research.


Tags: behavioral science, eye tracking, innovation, neuroscience

Disclaimer

The views, opinions, data, and methodologies expressed above are those of the contributor(s) and do not necessarily reflect or represent the official policies, positions, or beliefs of Greenbook.

