May 8, 2019

That’s a Fact, Jack

A rebuttal by Steve Needel defending the validity of significance testing.

by Steve Needel

Editor’s Note: Doing good, impactful market research is never easy, even in the best of circumstances. On the client side, demonstrating ROI in order to justify budgets can be a challenge. On the research company side, getting all the steps in the process aligned to leave you enough time to find the important insights is always a struggle. Sometimes, it can be similarly difficult for the field as a whole to get the recognition it deserves. That is why it can be so maddening when some of us engage in behaviors that hold the field back. While there have been some breakthrough approaches and concepts in recent years that have been enormously impactful, there has also been some discouragingly sloppy thinking that reflects badly on us all. Yesterday, Susan McDonald shared some thoughts on the impact of poor conceptual thinking and use of terms. Today, Steve Needel goes after sloppy statistical thinking that has made its way into print. While we try in these pages to focus on presenting the positive developments going on, we need to keep a watchful eye out for the flip side as well.


It seems like, in the world of marketing research, the hits just keep on coming. And we don’t mean that in a good way. A whole new front has opened up for those who want to bash our industry (or make a name for themselves with a straw man they can quickly conquer). The new front is the latest furor over statistical testing. No surprise if you haven’t heard about it, because, let’s be honest, how many of us read Nature or The American Statistician?

Here is your 10-second summary: a lot of researchers don’t really understand p-values, and this leads them to make suspect conclusions about their findings. There – that wasn’t too painful, was it? Nor was it terribly shocking, I’m betting. Bob Lederer, in his March 25th commentary, makes a little more of this than I think is there. But his concern, and that of the authors in the journals above, is nothing compared to the consternation in the Market Research Society’s April 23, 2019 newsletter. In Research Live, Jack Miles of Northstar Research goes off on a Trumpian diatribe regarding significance testing; more from Jack in a moment.

Drs. Amrhein, Greenland, and McShane, in their Nature commentary this March, worry about the tendency to classify results as either statistically significant or non-significant and the effect that has on our understanding of the world. The world is not dichotomous, and neither are research results, they argue; yet we over-value significant findings and under-value non-significant ones. Marketing research is just as guilty of this as any other discipline – we all know researchers who are focused on what is significant rather than what is meaningful. The authors of the Nature paper have some suggestions, mostly involving being smarter and not so anal-retentive about significance levels – it’s a paper worth reading.
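Their point about arbitrary cutoffs is easy to demonstrate with a minimal sketch (made-up A/B-test numbers, not figures from any of the papers cited): two variants whose results differ by a single conversion out of 500 can land on opposite sides of the 0.05 line.

```python
# Sketch: a standard two-proportion z-test, illustrative numbers only.
from math import sqrt, erf

def two_prop_p_value(x1, n1, x2, n2):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (x2 / n2 - x1 / n1) / se
    # Normal CDF via erf; double the upper tail for a two-sided test
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

p_a = two_prop_p_value(50, 500, 70, 500)  # 10.0% vs 14.0%: just above 0.05
p_b = two_prop_p_value(50, 500, 71, 500)  # 10.0% vs 14.2%: just below 0.05
print(f"A: p = {p_a:.4f}   B: p = {p_b:.4f}")
```

One extra conversion flips the verdict from "non-significant" to "significant", even though the two results are, for all practical purposes, the same evidence.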

Now back to Jack. He would like us to believe that significance testing is, in his words, “insignificant and irrelevant to modern marketing research”. Here’s Jack’s argument:

  • Marketing today “promotes innovation, boldness, risk, and embraces failure”.
  • Significance testing is antithetical to these characteristics.
  • Because many statistical procedures were developed in the 1920s, they can’t possibly be useful anymore.
  • Research with a sample size greater than 5 is useless; there’s no more ROI with a larger sample.
  • Because statistical significance doesn’t tell you whether something is profitable, it can’t be useful; the only metric that really matters in marketing is profit.

There are deficiencies in his points, rendering his arguments weak at best:

  • He rants about R.A. Fisher, the “father” of stat-testing, accusing him of belonging to the medical profession, which he thinks is less than a shining example of what marketing is about today. Oops – Fisher was a mathematician, not a medical professional, although some of his work was in genetics. And the fact that he developed statistical theory in the 1920s makes it no less valid or useful today. Most of us went to a real university and read a book or two – we learned to print books in the 15th century. If an old technology works, use it. Sometimes you just need a t-test.
  • Lots of people (including my company) do research on the profitability of a given product or marketing initiative. I’m betting Northstar has done some of this too. We all test a new idea’s success against the current approach or against alternative options and provide statistical testing of the outcome. I’m willing to grant that they may not do this type of research in the UK, but I’d bet a paycheck or two that they do.
  • Continuing his attack on stat-testing, Jack references Jake Knapp (late of Google Ventures – he hasn’t worked there in a couple of years). Mr. Knapp never said that research with n=5 is sufficient. He points to an article by the Nielsen Norman Group showing that usability research with more than 5 subjects doesn’t yield much better data. That’s not marketing research in general that Jake is talking about – it’s a very specific form of research for a very specific purpose.
  • Jack tries to give Knapp credibility by noting that Google Ventures is a $2.4 billion company. It’s not that big because of Jake Knapp, but because Alphabet is a $739 billion company and needs to dump some cash. We are talking peanuts here on the bottom line.

Why do I pick on Jack? It’s simple – if he had just written this opinion and tucked it away on Northstar’s website, I probably wouldn’t have noticed and certainly wouldn’t have cared. But it’s published by a respectable research organization that, because it’s an opinion piece, let fact-checking fall by the wayside. We see this way too much today in our politics (hello Brexit – meet Donald Trump) and it has no place in an industry devoted to truth.

Marketing today does not promote innovation, boldness, risk, and embrace failure – or at least no more than it ever has. It’s a poor carpenter who blames his tools, and Jack shouldn’t be blaming statistical testing for the industry’s shortcomings. That said, we shouldn’t blindly follow dictates about levels of statistical significance either. Stat testing is a tool that tells us how much confidence we should have in a research finding. Treating it as something more is never a good idea.
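The flip side is worth a sketch too: with a big enough sample, a commercially trivial difference becomes overwhelmingly "significant", which is exactly why the test measures confidence, not importance. (Hypothetical numbers; the same textbook two-proportion z-test approximation.)

```python
# Sketch (hypothetical numbers): a 0.2-point lift looks decisive at n = 1M per arm.
from math import sqrt, erf

def two_prop_p_value(x1, n1, x2, n2):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (x2 / n2 - x1 / n1) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# 10.0% vs 10.2% conversion, one million users per arm:
p = two_prop_p_value(100_000, 1_000_000, 102_000, 1_000_000)
print(f"p = {p:.2e}")  # a tiny p-value for a lift that may not cover its costs
```

The p-value says the lift is real; it says nothing about whether a 0.2-point lift pays for the campaign. That judgment is ours, not the t-test’s.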

 

References:

Amrhein, V., Greenland, S. and McShane, B. “Retire statistical significance”. Nature, 21 March 2019, pp. 305–307.

Miles, J. “Significance testing is insignificant to modern marketing”. www.researchlive.com, 23 April 2019.

