
Is Eye-Tracking Making us Blind and Other Research Maladies


By Steve Needel, PhD

I recently participated in an online discussion, one that is all too common these days, about the fate of marketing research: how did we get here? Why won’t they listen to me? Nobody appreciates me – the usual. The tenor of the discussion made me think that if we were all Buddhists, our rate of self-immolation would be alarming. I believe that our fate and our corporate success as researchers are in our own hands, as I have previously written (2006, 2008). I raise the question of whether eye-tracking is making us blind, along with the related questions of whether neuroscience has made us stupid and whether micro-expressions have made us heartless. These research tools are getting a lot of buzz these days, much of it generated by their providers. I’m not sure these techniques are what we think they are, and I am sure they are often not what they purport to be. We are seeing among marketers and marketing researchers an increasing lack of understanding and/or appreciation of the boundaries and appropriate uses of these new techniques – hence the purpose of this paper.

Let me start with a recent example from my own company. A client decided to use eye-tracking for a packaging test rather than go with us and test the new package in virtual reality. We thought this was strange because the client usually commissions us to define the sales impact of their package changes; in this case, their brief was very specific about this action standard. When I asked why they changed their decision criterion from sales to attention-getting, the response was that they thought they were the same thing. The research supplier they chose was happy to let them believe that.

Reading the published literature on eye-tracking (mostly by Drs. Wedel, Pieters, and Chandon), there is no evidence of a relationship between attention and sales beyond the elementary “if you don’t see it, you can’t buy it”. This literature comes both from academicians and from the companies that sell eye-tracking research (for example, Perception Research Services). This doesn’t mean eye-tracking is a bad tool – it only means that it doesn’t tell you what will happen to your sales with one package or another. Eye-tracking was originally designed for print media, later extended to webpage viewing analysis and to package testing, where the idea is to understand what someone pays attention to on a package (and perhaps in what order they attend to the different elements). As such, it is a great tool for designers to see whether what they are building attracts the kind of attention they want to attract. Mobile eye-tracking lets us get out of the lab and into the real world. Whether shoppers are paying attention to displays or point-of-purchase material is interesting, although I would contend that the question is secondary to whether displays or P-O-P material sell more stuff. When we move into eye-tracking’s newer applications, such as shelf set research, the technique becomes irrelevant or, at best, redundant with other information we have. In virtual reality, for example, we have yet to see eye-tracking add any information beyond whether shoppers pick up and look at a product – it is non-differentiating. Eye-tracking should be used for what it was intended to do; extending its application may be misleading.

Neuromarketing is another area that gets much more attention and much more credit than it deserves, a point well made by Dr. Deanna Weissberg of Rutgers University. There is no published, replicable evidence that increased neural activity in humans relates to much of anything when it comes to decisions or preferences. David Penn’s recent article in Research World makes it clear that the technique may be interesting, but that neuroscientists have yet to figure out what it means. The Advertising Research Foundation’s 2009 review of Innerscope finds support for using neuroscience to understand broad emotional response, but it also suggests that the “single-method” approach of fMRI or EEG headbands is not going to be sufficient, in the same way that GSR, pulse, and respiration rate have not, in the past, led us to make great strides in understanding physiological responses to stimuli. Here’s where we can easily be led astray: we shouldn’t expect people to have emotional responses to most of the products we sell in the CPG world. It’s a breakfast cereal, after all, not a life-long commitment! You need to generate a pretty strong emotional response to show up above baseline neurological activity – commercials by their very definition might do this, but soft drinks? We don’t think so, and no published studies have validated the claims made by some of the promoters of neuroscience for marketing research. Indeed, recent claims by Martin Lindstrom regarding the neuropsychologically compelling nature of the iPhone have drawn criticism in the New York Times from leading neuroscientists (October 4, 2011). And don’t be swayed by the “emotion drives 95% of purchases” claims floating around – habit drives 95% of the purchases in the CPG world (or thereabouts).

A recent LinkedIn article (posted by the research agency’s PR firm) touts the use of micro-expressions to measure consumers’ emotional responses to products. The rationale, according to the CEO of the supplier being quoted, is that emotions, rather than logic, drive purchasing. And the discussion would have us believe that we as an industry are not sufficiently adept at eliciting emotions without expensive and time-consuming facial analysis. The writer states that “the science is complex and the ability to analyze human responses is held by only a few trained individuals.” This, of course, would justify high prices for this research agency and gives it that mystique, that patina of “well, it must be good if it’s expensive and time-consuming and nobody else can do it” that many of us fall for all too often. There are so many talking points to bring up, I hardly know where to start. This is not a unique and highly specialized research tool – anyone can go on Paul Ekman’s website and get fully trained for about US$70, and he claims to be able to teach you micro-expression recognition in about 40 minutes. Nor does micro-expression analysis offer the fine-grained protocol one might expect: facial analysis does not parse emotions at a fine grain; you are looking at seven basic emotions. We might ask whether, if a researcher can’t measure the difference between a positive and a negative reaction to a product or a concept quickly and inexpensively, they should even be in the marketing research business. Do we think that our research participants are lying to us at such a high rate that we need to be constantly on the lookout for deception? I would contend that if we don’t trust our participants, we shouldn’t be asking them questions. I do recognize that participants aren’t always good at telling us what they are thinking or feeling, but that is a problem with what and how we are asking rather than evidence that they are lying to us.
Micro-expressions made for a mediocre television show and have all the promise of being a mediocre marketing research technology.

In a recent NewMR radio show, I was asked why these new technologies appear to be so popular if there is little science behind them. By science, I mean a demonstrable, reliable phenomenon that is linked to something measurable related to marketing actions. Linking eye-tracking to sales, linking increased blood flow in the brain to a valenced emotion, linking physiological arousal to affect, or showing that micro-expressions significantly improve our understanding of a person’s response to a product or offer would all be good science; little to none of this exists. Popularity can be explained, in part, by our inferiority complex; researchers remain so concerned with getting that seat at the table that the sizzle and sexiness of these new tools outweigh their lack of validation. Rather than earning that seat by helping our companies do what they do better, we try to look deep, cutting-edge, and insightful, all to justify our presence. The problem is, we often fail at that – it’s not easy being deep and insightful. When we don’t get the payout from these hyped technologies, the fall is that much harder. Fortunately for us, marketers may be even more susceptible to the allure of high tech than we are, meaning they’ll buy into the next idea that we bring up almost as quickly.

I’d like to see us use these tools in the appropriate situations for the appropriate types of research questions they were designed for. I’d like research buyers to demand some validation of the output (indeed, having been in this position, I’d like to see research buyers fund some of this validation). Finally, I’d like to see us worry less about how we get a seat at the table and more about how we do research that helps our companies or our clients sell more – that’s the purpose of most marketing research.

3 responses to “Is Eye-Tracking Making us Blind and Other Research Maladies”

  1. As the author mentioned my company by name, I do feel compelled to respond to several of his assertions.

    To begin with, Perception Research Services (PRS) introduced eye-tracking to the consumer research industry in the mid-1970s and we’ve literally conducted tens of thousands of custom studies since. So to be described as a new technology (and grouped with neuromarketing and micro-expressions, etc.) is quite misleading.

    But far more importantly, the author fundamentally misrepresents the role and value of eye-tracking on multiple levels. First, the “choice” between measuring sales impact and measuring visibility (“attention getting”) is a completely false one. Nearly all PRS studies include both eye-tracking (to gauge retail visibility) and shopping exercises from physical or virtual shelves/aisles (to gauge sales impact). The added-value of eye-tracking lies in uncovering the linkage between visibility and sales, which leads to a second point. Contrary to the author’s claims, there is a direct and proven linkage between retail visibility (attention) and sales, which has been proven by our clients – and in fact, is exactly the main point of the published academic literature he references (specifically, papers by Professor Chandon, which are available on our web site). We’ve seen this not only in packaging studies, but also in our shelf set/category management studies, in which planograms that increase visibility (for specific brands, sub-categories or product forms) lead consistently to higher sales for these products.

    While the author is correct in acknowledging the value of eye-tracking in gauging viewing patterns (for advertisements, packages, POS materials, etc.) and in understanding the shopping experience (via in-store/mobile eye-tracking), his dismissal of its value in gauging retail visibility (“attention getting”) – and his implication that visibility and sales are not related – is simply misguided (and perhaps self-serving). In addition, while we agree that using eye-tracking in isolation or as a sole action standard is unwise, we’d also argue that gathering sales impact/purchase patterns alone (without eye-tracking) limits one’s ability to explain results (to understand the “why” behind shopping behavior and purchase patterns).

    1. Thanks for the feedback Scott; I appreciate you making the time for a counter argument.

      I can’t speak to the particulars here one way or the other but in general I think the intention of the post was to highlight the need to understand the correct application of various approaches and technologies. Personally I see value in eye tracking, neuromonitoring, virtual shopping, facial analysis, etc… and believe they have immense possibilities for enhancing our understanding of consumer decision making. That said, I also believe we have to continue to be clear that despite the copious amounts of validation for these techniques we must continue to experiment and share information openly across the industry in order to uncover appropriate applications of them.

      As brands continue to shift their focus to understanding emotional and behavioral drivers, it is incumbent on the research community to work collaboratively to incorporate new advances (or older technologies in new ways) into our tool kit in order to deliver value to clients.

  2. Scott’s reply deserves a reply:

I didn’t say eye-tracking was new – I said it was getting a lot of buzz; most of what I’m seeing is tied into shopper marketing issues. And if your published material shows a direct connection between attention and evaluation, I can’t find it. I just re-read your JM article with Chandon et al. and nowhere does it show a clear and direct relationship between attention and brand sales. Indeed, the direct path coefficient isn’t in the paper. The correlations presented in this paper between attention and consideration or choice are trivially positive. And the lit review references your own book chapter (2007), in which you all state that “noting and reexamination are only weakly correlated with brand consideration”.

    None of the studies you’ve published show that people are more likely to buy a product or buy more of it due to more attention, beyond the obvious “If I don’t see it I can’t buy it” and accepting your definition that “seeing it” means more than one fixation. We can get the same results without needing to resort to attention as a mediating mechanism.

    This may be why I get a lot of work following up on PRS’s work – you guys do a great job helping them design new packaging and they come to us to see if there is a short-term sales effect.
