It’s New And It’s Shiny – So What?


C-Suite folk and their minions, the marketing team, have always had a fascination with new and shiny things. This is especially true, it seems, when they look at marketing research technologies. A couple of years ago, I gave a talk at the MRMW US conference on the misuses and misrepresentations of eye-tracking, neuromarketing, and facial recognition. At MRIA and ESOMAR last year, I questioned whether Big Data is as big an issue as we seem to be making it. My goal in each of these talks was not to try to kill off these new tools. What I wanted to do was provide some guidelines for how we as an industry should be evaluating these new ideas. As innovation is the theme of a recent conference in Amsterdam and an upcoming one in Atlanta, it might be time to further explore these guidelines; especially because that’s what Lenny pays me to do (okay – he really doesn’t pay me much – an occasional promise of lunch).

The rallying cry that spurs this interest in new and shiny things is “Innovate or Die”. Here’s the problem – there’s no validation of this statement. While self-styled authorities such as Tom Peters have been saying “Innovate or Die” for years, Getz and Robinson (2003) actually studied the question of whether innovation leads to a long and happy business life. Their conclusion is simple – it’s not innovation that makes a company healthy, it’s the system it has for improving products. They point to Xerox, which may have been the most innovative company in the world in the 60s and 70s, and Alcatel, which was in the top three for communications technology in the 90s, as two of the great disasters of failed innovation. Their problem, according to the authors, was a set of innovations that nobody wanted.

Our tendency in marketing research is to take any new technology, hype its value beyond its original intent, then backpedal when we find out the emperor didn’t have as large a wardrobe as we thought. Neuromarketing has gone through this pattern in the last two years, with the extravagant claims of the charlatans largely disavowed by most in the business, even by those selling that stuff. We went through this last year with Big Data, when much of what was being promised turned out to be Big Hot Air. In both cases, we’re now getting down to the real work of determining what we can learn from neuroscience and from the analysis of large, semi-structured data sets, and where the boundaries of those tools lie. The problem remains, though: marketers and executives seem to believe these are critical factors in their business, and when we can’t deliver, it’s the research industry that takes the hit.

You can’t blame the executives for this (much). They shouldn’t be expected to understand the intricacies of research, in the same way we researchers often don’t understand the rules of accounting, the science behind logistics, or the legalities of human resource management. Their job is to make decisions for improving their business; our job is to give them information that will inform those decisions. That information may be situational, as in, “here is where the marketplace is and where we are in that marketplace”; it may be proactive, as in, “we’ve identified an opportunity”; or it may be reactive, as in, “we’ve tested this marketing idea and here’s what we can expect consumers to do”.

So where’s the disconnect? Mostly it comes from the perception that we don’t do these things as well as they need, or in a time frame that fits their perceived needs. This opens the door for anyone who promises to fill those needs, whether they can or not. And this is where innovation often goes wrong and gives marketing research a bad name. It may be trite, but innovation needs to produce methodologies that are faster, better, or cheaper. Faster and cheaper need to be at least as good as what was there before. Better needs to be demonstrably better, not just theoretically better. This is where I think we have missed the boat – we all too often sell what’s new and shiny rather than selling a better tool.

What makes a better tool? That’s easy – one that more accurately predicts what shoppers and consumers will do. The promise that many (not all) suppliers in the eye-tracking, neuromarketing, facial recognition, mobile research, and Big Data space make is that they have the technology to tell you what people will really do. The problem is that, mostly, it’s not true. There is no data showing that more attention, or longer attention, to a product on the shelf improves the probability of purchase. There is little data to suggest that neurological or physiological patterns are better predictors of purchasing than well-constructed survey or experimental techniques. The basic tenets of facial recognition are now coming under attack – they may not be as universal as we once thought, and again, nothing has been published to say it is a better tool.

Just being new, just being different, just being sexy with lots of sizzle sells; it certainly has in the past and probably will in the future. But if MR is really swirling down the drain, then we may be cutting off our noses to spite our faces. Selling cotton-candy techniques is only hurting us. Considering them innovative just because they are new and shiny misleads those whom we most need to trust us. We can do a better job, and we must do a better job, of innovating.


7 responses to “It’s New And It’s Shiny – So What?”

  1. Interesting piece. The hard part with innovation is not developing the tool. It is creating a solution around the tool that creates value. I believe that to be the primary disconnect. This is not unique to market research. It is the case in most b2b technology businesses. Unless tool providers become solution providers, we will continue to see the disconnect between cool new technology and solving real problems for customers.

  2. Very good article. It seems like many want an MR solution that is not going to involve having to talk to people. Human nature is complex. Many of these new techniques avoid human interaction. There are new MR techniques that are genuinely helpful in reaching understanding. However, much of what is trendy is more sales than true research innovation. Love the Fishburne cartoon.

  3. Tantalizingly curmudgeonly, but I think we can dismiss the straw man pretty easily. It’s not all or nothing. No company can survive by only innovating, but no company can survive without some innovating. In organizational theory, they call it the exploration-exploitation dilemma. Too much exploration and you don’t adequately exploit the knowledge you already have, too little exploration and you become prey to disruptions you didn’t even know were there. The balance of how many resources to devote to explore vs. exploit is constantly in flux. The most adaptive companies (and species) are always adjusting — exploiting what they’ve learned, but also devoting some proportion of their resources to learning new things, even if they may not seem practical at the moment. Most companies die not from over-innovation, but from over-exploitation, forgetting the capacity to learn new tricks because the old tricks just seem to be working so darn well.
    Why do innovators buy shiny new stuff? It’s not because they’re idiots. It’s because they can afford to experiment, and they are looking for competitive advantage. They know that most of what the innovative vendors tell them is BS. They expect that. But mainstream buyers won’t put up with that sort of thing. They buy research for competitive parity, not advantage. They want solid proof points, so they lean toward buying what they’ve bought in the past. Every new technology has an adoption life cycle, with different types of buyers wanting different things at different times, as Geoffrey Moore taught us in “Crossing the Chasm” and later books.
    Also, the notion of “a better tool” is trickier than it appears. Sure, prediction is great, but here’s the reality: in a social system as large and complex as a marketplace, predicting any outcome from any one factor (e.g., sales from an ad test) is just about impossible, whether your tool is shiny or not. So how do you pick a “better ad”? Sometimes a new tool can provide better diagnostic information, even if it’s not much more predictive of the ultimate outcome you want to achieve. For example, brain science metrics can reveal objectively whether ad A or ad B produced more emotional engagement. The same goes for attention grabbing, cognitive load, or memory activation. Knowing that memory was more engaged by ad A than ad B is useful diagnostic information that can be extremely helpful for making a choice (which is the real purpose of most testing, not predicting marketplace performance), and it is certainly less fraught with error than asking people which ad they liked more, which drew their attention more, or which was more memorable. So I wouldn’t be too hard on new and shiny; it helps us get out of the rut of old and dull, and that’s where we’re more likely to die.

  4. @Steve G – thanks for the thoughtful missive. I was not trying to set up a straw man; I was trying to say there are two opposing notions floating around – the notion that the new and shiny must be good, and the fact that senior executives don’t think highly of MR. One reason, not the only one, may be that they keep being disappointed by new stuff. Certainly another related reason is that we often don’t do a great job of predicting.
    If we pretend to be a science, then your last point is the key – how do we define better tools? Of course it’s tricky – if science were easy, everyone could do it. Predicting sales from an ad test is not that difficult – we did it for years at Adtel and Behaviorscan with a great deal of success. Yes, a tool may be better if it better measures something or gives you more diagnostic information, but if the phenomenon to be predicted is unrelated to what you measure, who cares? In the CPG world, there is very little evidence that emotional engagement means anything – this is not true in other industries. If emotional engagement is unrelated to CPG purchasing, then measuring emotional engagement better is a silly course of action and hardly a benefit.
