May 7, 2018
Implicit Association Testing (IAT) has gained use in MR, but has limitations. Practitioners need to be held more accountable.
Editor’s Note: Steve writes pointedly about a problem I’ve seen throughout my market research career – the misuse and downright abuse of concepts and methods from academic Psychology. Suppliers sometimes fall too much in love with a methodology and succumb to the “I’ve got a hammer, where’s a nail?” fallacy. Clients can’t be experts in everything, and are sometimes misled by glib jargon and simplistic case examples. More clients need to ask the hard questions: Can you show me real quantitative validations published in academic journals? What are the limitations (all methods have them)? Can you show me a case study that led to better business results than the more standard methods would have? If they can’t answer these, buyer beware!
I am often amused and amazed (sometimes dazed and confused) when we try to take an idea from academia and turn it into a business model for marketing research. I’m amused because we usually do it badly, and I’m amazed because people actually spend money on it. Many things that carry the “neuro” prefix fit into this category. Reading Kahneman’s Thinking, Fast and Slow is another – has any piece of writing ever spawned more bad research tools?
So I was hoping to be wowed by a webinar I saw the other day, but instead, I was amused and amazed (and not in a good way). The webinar focused on Implicit Association Testing (IAT), which relies on an interesting, if somewhat erratic, social psychological phenomenon. If you pair an attitude object with an adjective that matches it, people can tell you whether that adjective is good or bad faster than if you pair the object with an adjective that doesn’t match. Here’s an example: assume you like Delta Air Lines and hate United. If I show you a series of adjectives such as friendly, efficient, pleasant, and roomy while showing you the Delta logo, and I ask you to tell me quickly whether those are positive or negative adjectives, you’ll do it faster for Delta than for United. Similarly, if I show you adjectives like obnoxious, slow, angry, and cramped, you’ll tell me those are bad faster for United than for Delta. When the attitude object is congruent with the adjective, we are faster at correctly assigning the adjective as positive or negative. It’s really a very simple finding, although there is a lot of discussion about the psychological mechanism underlying it.
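To make the measurement concrete, here is a minimal sketch of how an IAT-style congruency effect is typically scored from reaction times. The brand names, adjectives, and timing values are hypothetical, and this is a simplified illustration of the idea (real IAT scoring, such as Greenwald’s D-score, also standardizes by response-time variability):

```python
from statistics import mean

# Hypothetical reaction times (in milliseconds) for classifying adjectives
# as positive or negative while a brand logo is displayed.
# "congruent"   = the adjective's valence matches the respondent's presumed
#                 attitude toward the brand (e.g., "friendly" with a liked airline)
# "incongruent" = the adjective's valence mismatches that attitude
trials = [
    {"pairing": "congruent",   "rt_ms": 512},
    {"pairing": "congruent",   "rt_ms": 548},
    {"pairing": "congruent",   "rt_ms": 533},
    {"pairing": "incongruent", "rt_ms": 641},
    {"pairing": "incongruent", "rt_ms": 689},
    {"pairing": "incongruent", "rt_ms": 655},
]

def congruency_effect(trials):
    """Mean RT on incongruent pairings minus mean RT on congruent pairings.

    A positive difference is read as evidence that the 'congruent'
    adjectives match the respondent's implicit attitude toward the object.
    """
    congruent = [t["rt_ms"] for t in trials if t["pairing"] == "congruent"]
    incongruent = [t["rt_ms"] for t in trials if t["pairing"] == "incongruent"]
    return mean(incongruent) - mean(congruent)

# A larger gap is interpreted as a stronger implicit association.
print(f"Congruency effect: {congruency_effect(trials):.1f} ms")
```

The entire inference rests on that reaction-time gap – nobody is ever asked how they feel about the brand, which is exactly the appeal described below.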
The appeal of this procedure for eliciting attitudes towards an object is the unobtrusiveness – we are never asking whether or how much you like Coke or Pepsi. We are never asking you to rate Coke and Pepsi on a set of rating scales. We are not asking you to compare Coke and Pepsi. We are simply seeing which adjectives are congruent with our attitude object by capturing the automatic reaction to the pairing of those adjectives and objects. This would seem to get around self-presentation issues, social desirability issues, interviewer bias, and all those other nasty things that go on with surveys.
It would… if it worked. But the originators of this process as a tool for attitude research (Russ Fazio and Anthony Greenwald) have published a number of caveats. A few of these are worth noting:
There are controversies about a number of other aspects of implicit association testing, but no definitive answers. The technique seems to be methodologically robust – you can execute it in a number of ways, with a number of different types of stimuli, and come up with the same answer (the presentation speed notwithstanding). The task is easy to do from a respondent’s point of view. You can collect a lot of data quickly and relatively cheaply, according to the webinar. It appears to work well when you have an established product where people may be reluctant, or not able, to tell you what they really think.
There are a number of research tools on the market today that claim some basis in science, usually Psychology. Quite often the vendors have cherry-picked the studies that support the tool, not the ones that would contra-indicate or mitigate its use, and foist this pseudoscience on unsuspecting buyers. I would suggest you dig into these tools deeply, asking a lot of questions, before running off and using them. When it comes to many of these tools, including implicit association tests, there is much less science behind them than the sellers are admitting.