By Paul Conner
Tom H.C. Anderson said it very well recently in a LinkedIn discussion group:
“Too often data is chosen for analysis because a vendor or boss tells you it is important rather than [supporting] a decision originating with the original research question to be solved. Arguably, nowhere is this currently more true than the pre-occupation with twitter analytics; chosen because it’s free and easy to get to, yet representative of less than 8% of the population and severely lacking in signal quality.”
— Tom H.C. Anderson in The Future of Market Research.
Our industry is currently enjoying an influx of outstanding new data collection and analysis technologies that let us explore, practically and unobtrusively, what is on consumers' and shoppers' minds and hearts, both consciously and non-consciously. As Tom Anderson notes, this includes social media analytics, as well as techniques drawn from neuroscience and social and cognitive psychology. These are wonderful technologies that give us the opportunity to provide our clients with insights reflecting how consumers and shoppers actually make their decisions.
However, to Tom’s point, all of this new glitter can be blinding. How many of our clients, and how many of us, have become so enamored with these technologies that we forget they are simply new tools for making the critical, yet fundamental, decisions we have always made with the information at our disposal? Should we launch this product, or not? Whom should we target? How should we promote our product? What should we charge? What channels should we use? How do we display our products in stores or online? The list goes on and on.
Most of the applications we face are not new. But even if some new applications have crept in recently (e.g., should we develop an app for our product, or not?), the fundamental way in which we use the information at our disposal, whether it comes from self-reported accounts of how people feel about a product or facial coding that indicates how people feel about it, should not be new. Namely, the information we collect should be specifically relevant to the decisions we have to make.
For example, if an application is to decide whether or not to pursue production of a new product, and if the criterion for making that decision is “pursue production if at least 10% of the general population is aware of the product category,” then why would one need to conduct a facial coding study to see how people feel about the product category? One may argue that how people feel about the product category should be a criterion for the decision, which would make facial coding a relevant method, but that misses the point. For whatever reason, awareness may be the lone criterion and, if so, facial coding should not be considered as a method.
Despite the logic above, new techniques are often chosen because they are new rather than because they address a specific, well-thought-out, reasonable set of criteria. And in choosing a technique because it is new, or leading-edge, or receiving substantial media coverage, rather than because it is relevant, how often has our research ended up in the “what do I do with these data” pile?
In The AIM Process: A Systematic, Stepwise Procedure for Improving the Actionability of Marketing Research, I deal with this issue, which I call the “purpose-process disconnect.” Marketing research is an applied discipline whose purpose is to support marketing decisions and actions. But too often marketing researchers and their clients start the research design process with a technique or an information-based question in mind, rather than with the actual decisions and actions (a.k.a. applications) they must make once the research is completed, and the criteria upon which they will base those decisions and actions. In other words, much of the time the purpose is disconnected from the process.
From the book, here are some examples of requests I’ve received to start the research design process:
• “Hey Paul, I need some research to see how people like all the components of our new service. Can you write me up something on that?”
• “Hey Paul, we need some focus groups next week to see what people think of some names for our product. Can you set those up for me?”
• “Hey Paul, this weekend we need some research to find out if people are aware of our advertising campaign or not and we need to find out who is and who isn’t. I know it’s Thursday, but can you please get that done so that we can report to our CEO on Tuesday?”
What is missing from each request is the specific decisions and actions to be taken on the basis of the research (applications) and the rules or guidelines upon which those decisions will be made or actions will be taken (applications criteria). Because of this, research requested and carried out in this way stands a great chance of being unactionable.
Our new techniques tempt us to use them, and in so doing invite the purpose-process disconnect. They are wonderful techniques, but let’s take the time to choose them correctly. Together with our clients, let’s take the time to clearly define our applications and their criteria, admittedly a challenging endeavor to say the least. If the new techniques fit, then by all means, use them. And let the insights that follow justifiably reflect the glitter they bring to our industry.
For more information about The AIM Process: A Systematic, Stepwise Procedure for Improving the Actionability of Marketing Research visit http://www.theaimprocess.com. The book is available in print and Kindle on Amazon.com at http://amzn.to/1goYa74.