Editor’s Note: Steve writes pointedly about a problem I’ve seen throughout my market research career – the misuse and downright abuse of concepts and methods from academic Psychology. Suppliers sometimes fall too much in love with a methodology and succumb to the “I’ve got a hammer, where’s a nail?” fallacy. Clients can’t be experts in everything, and are sometimes misled by glib jargon and simplistic case examples. More clients need to ask the hard questions: Can you show me real quantitative validations published in academic journals? What are the limitations (all methods have them)? Can you show me a case study that led to better business results than the more standard methods would have? If they can’t answer these, buyer beware!
I am often amused and amazed (sometimes dazed and confused) when we try to take an idea from academia and turn it into a business model for marketing research. I’m amused because we usually do it badly and I’m amazed because people actually spend money on it. Many things that have the “neuro” prefix fit into this category. Tools inspired by Kahneman’s Thinking, Fast and Slow are another – has any piece of writing ever spawned more bad research tools?
So I was hoping to be wowed by a webinar I saw the other day, but instead, I was amused and amazed (and not in a good way). The webinar focused on Implicit Association Testing (IAT), which relies on an interesting, if somewhat erratic, social psychological phenomenon. If you pair an attitude object with an adjective that matches it, people can tell you that adjective is good or bad faster than if you pair the object with an adjective that doesn’t match. Here’s an example: assume you like Delta Air Lines and hate United. If I show you a series of adjectives such as friendly, efficient, pleasant, and roomy, while showing you the Delta logo, and I ask you to tell me quickly whether those are positive or negative adjectives, you’ll do it faster for Delta than for United. Similarly, if I show you adjectives like obnoxious, slow, angry, and cramped, you’ll tell me those are bad faster for United than for Delta. When the attitude object is congruent with the adjective, we are faster at correctly assigning the adjective as positive or negative. It’s really a very simple finding, although there is a lot of discussion about the psychological mechanism underlying it.
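To make the mechanics concrete, here is a minimal Python sketch of the congruency effect just described. Everything in it is simulated and hypothetical – the brand pairing, the reaction times, and the size of the effect are invented for illustration, not drawn from any real study:

```python
import random
import statistics

# Hypothetical sketch of an implicit-association task: all timings and the
# size of the congruency effect are simulated, not real data.

def simulate_trial(congruent, rng):
    """Return a simulated reaction time in milliseconds. Congruent pairings
    (liked brand + positive adjective, or disliked brand + negative
    adjective) are answered faster on average than incongruent ones."""
    base = 600 if congruent else 750  # assumed mean RTs, for illustration only
    return base + rng.gauss(0, 50)    # trial-to-trial noise

def congruency_effect(n_trials=200, seed=42):
    """Mean incongruent RT minus mean congruent RT. A positive gap is the
    implicit-association effect described above."""
    rng = random.Random(seed)
    congruent = [simulate_trial(True, rng) for _ in range(n_trials)]
    incongruent = [simulate_trial(False, rng) for _ in range(n_trials)]
    return statistics.mean(incongruent) - statistics.mean(congruent)

print(f"Congruency effect: {congruency_effect():.0f} ms")
```

Under these assumed parameters the gap comes out positive; a real implementation would infer relative attitude strength from that gap, respondent by respondent and brand by brand.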
The appeal of this procedure for eliciting attitudes towards an object is the unobtrusiveness – we are never asking whether or how much you like Coke or Pepsi. We are never asking you to rate Coke and Pepsi on a set of rating scales. We are not asking you to compare Coke and Pepsi. We are simply seeing which adjectives are congruent with our attitude object by capturing the automatic reaction to the pairing of those adjectives and objects. This would seem to get around self-presentation issues, social desirability issues, interviewer bias, and all those other nasty things that go on with surveys.
It would… if it worked. But the originators of this process as a tool for attitude research (Russ Fazio and Anthony Greenwald) have published a number of caveats. A few of these are worth noting:
- This process only tends to work when the attitudes are strongly held; weak attitudes don’t respond to the implicit association process. In practical terms, if I like Angel Soft, Charmin, and Cottonelle but I don’t care that much, you won’t see a response-time difference in my data when we look at congruent vs. incongruent adjectives – that’s bad.
- The scientific basis for implicit association assumes you have an attitude towards the object. For new products or new packaging, you would not have an attitude. You may have an opinion, but it would be neither well-formed nor automatic. This makes the results questionable for concept tests, packaging tests, and new product tests.
- This process is extremely sensitive to how long you show the stimulus. The preferred time seems to be a combined 300 milliseconds for the object and the adjective, perhaps because Fazio showed that the effect disappears if we extend the time to 1000 milliseconds (when you have time to think about it).
- Reliability is an issue with this technique – there’s not a strong relationship in a test-retest situation. Greenwald’s meta-analysis puts test-retest reliability at a median of r = 0.56. Other researchers have found a much lower reliability coefficient. Keep in mind that r = 0.70 is the commonly accepted minimum for reliability coefficients.
- In academically published consumer behavior studies, the predictive validity of implicit association tests runs just under 20%. In the same studies, explicit attitude measures correlate with behavior at just under 50%, consistent with other research on the relationship between purchase intent scores and actual purchasing.
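To see what a test-retest reliability around r = 0.56 means in practice, here is a small Python simulation (all numbers invented for illustration): each respondent has a stable “true” implicit score, but each measurement session adds its own independent noise, so the correlation between two sessions lands well below the 0.70 cutoff even though nothing about the respondents has changed:

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical respondents: a stable "true" implicit score plus independent
# measurement noise on each session. The noise level is chosen so the
# expected test-retest correlation is roughly 0.55, near the median above.
rng = random.Random(1)
true_scores = [rng.gauss(0, 1) for _ in range(500)]
session_1 = [t + rng.gauss(0, 0.9) for t in true_scores]
session_2 = [t + rng.gauss(0, 0.9) for t in true_scores]

r = pearson_r(session_1, session_2)
print(f"Simulated test-retest r = {r:.2f}")
```

The point of the sketch: a reliability near 0.56 implies measurement noise comparable in size to the signal itself, which is exactly why a score taken today may not reproduce next week.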
There are controversies about a number of other aspects of implicit association testing but no definitive answers. The technique seems to be methodologically robust – you can execute it in a number of ways with a number of different types of stimuli and come up with the same answer (the presentation speed notwithstanding). The task is easy to do from a respondent point of view. You can collect a lot of data quickly and relatively cheaply, according to the webinar. It appears to work well when you have an established product where people may be reluctant or unable to tell you what they really think.
There are a number of research tools on the market today which claim some basis in science, usually Psychology. Quite often the vendors have cherry-picked the studies that support the tool, not the ones that would contra-indicate or mitigate its use, and foist this pseudoscience on unsuspecting buyers. I would suggest you dig into these tools deeply, asking a lot of questions, before running off and using them. When it comes to many of these tools, including implicit association tests, there is much less science behind them than the sellers admit.