By Dr. Stephen Needel
The annual GRIT report should be required reading for marketing researchers; it provides a current snapshot of what senior players in our business are feeling about things. That said, the reader should take a big grain of salt to go along with the red herring that winds its way through the report. The red herring is innovation, described as a holy grail with magical powers that should be self-evident. It is my contention that innovation for the sake of innovation is foolish. It only appears to be effective because so many buyers are enthralled with all things new and shiny, rarely digging beyond the surface validity of a lot of new techniques. Innovation needs to improve the predictability and reliability of our research – it does not have to be new, flashy, sexy, or disruptive.
The “innovate or die” movement persists, even though research has shown this to be untrue (e.g., Getz and Robinson, 2003). Yes, I know about the buggy whip company that didn’t see cars coming, and about Netflix changing the game, much to Blockbuster’s consternation. Amazon brought us a new way to shop, and one day their financial performance might justify what they do and how they do it – or not. Walmart’s latest challenge will certainly put a dent in Amazon’s growth estimates. Now take a look at all the “innovative” tools marketing researchers have developed. Are there any that blow you away?
In the GRIT report, John McGarr makes a critical point. He says, “MR providers need to keep in mind that no client owes the industry a trial of new methods for the sake of being innovative.” Some will argue that we should expect buyers to try new things when they are (sorry about this) faster, better, or cheaper. I’d take the position that faster is fine, as long as it is at least as good and has a similar value proposition. I’d take the position that better is always better, even if it is not as fast and not as cheap – you should pay more for a better answer. I’d take the position that cheaper alone almost always has a hidden cost, usually in bad design, sampling, or survey construction.
The marketing research industry needs innovation, but that innovation needs to be directed at solving research problems. Here’s an example: as good as many of our models are, we still have a problem predicting new product performance. When our new product failure rates are over [70%, 80%, 90% – pick your number], we clearly do not understand shopper dynamics beyond basic trial and repeat analyses. When was the last time we saw a new or better way to do new product forecasting that was validated?
Instead of solving research problems, we pretend that automation and DIY are the innovations most needed because they make the research process less expensive. These are technical innovations, but not problem-solving innovations. We miss the point – it’s not the price tag that matters, it’s the quality of the answers we are able to give our clients. Of course budget limitations play a part in purchasing decisions. But, and it’s a big but, a methodology that is meaningfully more accurate will always be worth the cost. Greg Archibald, in summing up the report, says, “Over the next few years, we are going to see a continued focus on improving tools and methodologies…” I’d respectfully disagree – I don’t think we, as an industry, are very focused on improving our toolkit; rather, we are trying to come up with the next new sexy thing to sell.
Innovations such as automation and AI are great for the business of marketing research, but they provide only a trivial benefit for our end-clients. They need us to do our job better, they will pay us to do that, and we need to innovate with that in mind.