October 17, 2019

The Uses and Abuses of Stat Testing Marketing Data

Joel Rubinson details the risks we run if we abandon stat testing of marketing data.


by Joel Rubinson

President at Rubinson Partners Inc


Editor’s Note: There has been a lot of discussion online in recent weeks about an article in Nature that argued for eliminating the use of statistical significance.  Needless to say, while some of the online discussion was sympathetic to the authors, it also aroused vehement disagreement among many.  My friend Joel Rubinson recently posted some thoughtful comments about the original article on his personal blog, and with his kind permission, we reprint it below.


An attention-grabbing paper was published recently in Nature magazine and endorsed by hundreds of academics.  Bob Lederer devoted a video to it, which is how I learned about it.

The premise: we should abandon stat testing and embrace uncertainty!

Abandoning stat testing altogether is a bad idea for reasons I will explain. Yet, the way researchers use stat testing needs to become more thoughtful, less auto-pilot.

Stat testing marketing data has never been about immutable laws of statistics…it is a means to an end… objective and simplified rules for marketing decision-making in a practical world.  It also is a key tentpole for methodologists designing sample sizes for marketing studies and experiments.

So, when does stat testing go wrong?

High Confidence Levels Can Be Paradoxical

Setting ultra-high confidence levels sounds like it is intended to reduce business risk, when it actually can have the opposite effect… increasing business risk.

Consider how high confidence levels can lead to product degradation. Imagine a margin-improving formula change to an existing brand.  Null hypothesis: the performance difference is not noticeable. When tested at the 95% confidence level, it might take a 10% or greater reduction in product satisfaction to register as statistically significant. But isn’t a 5% degradation enough cause for concern? The paradox is that setting the confidence level too high can INCREASE the risk that a poorer-performing product formulation will make it to market and harm brand equity.
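Here is a minimal sketch of that paradox (Python; the satisfaction scores and cell sizes are hypothetical): with 80% top-two-box satisfaction for the current formula, 75% for the cheaper one, and 150 respondents per cell, the 5-point drop fails a 95% significance bar but would be flagged at 80%.

```python
# Hypothetical illustration: a 5-point drop in top-two-box satisfaction
# (80% current formula vs. 75% cheaper formula, n = 150 per cell) is NOT
# significant at 95% confidence, but would be flagged at 80%.
from math import sqrt
from statistics import NormalDist

def one_sided_prop_test(p1, n1, p2, n2):
    """One-sided two-sample z-test for proportions (H1: p2 < p1)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 1 - NormalDist().cdf(z)   # one-sided p-value

z, p = one_sided_prop_test(0.80, 150, 0.75, 150)
print(f"z = {z:.2f}, p = {p:.3f}")
print("Significant at 95% confidence?", p < 0.05)   # False -> change sails through
print("Flagged at 80% confidence?    ", p < 0.20)   # True  -> degradation caught
```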

There are times when high confidence (95%, even 99%) might be appropriate, e.g. for legal issues such as claims substantiation or safety testing of medications.  However, when clients mandate the same rigid and extreme confidence levels for marketing decisions (“we make decisions based on 95% confidence intervals…”), it leads to inaction. Advertising and new products are notoriously hit or miss…half of ad campaigns fail to show acceptable ROI and 80% of new products fail; isn’t 80% certainty enough evidence that you might have something better?

Study Design

Stat testing is also needed by methodologists because it directs the design of a study. It tells us how big sample sizes need to be to detect a difference between options that would be meaningful to the business, and to be able to call that difference statistically significant.  However, if sample sizes are TOO big, they become dysfunctional…small, irrelevant differences become statistically significant, and the studies cost much more than they should. Yes, studies based on sample sizes that are too big are just as bad as sample sizes that are too small!
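As a rough illustration (Python; the preference levels and cell sizes are hypothetical), a standard two-proportion power calculation sizes the study to detect the difference that actually matters to the business, while an oversized sample makes a trivial half-point gap come out “significant”:

```python
from math import sqrt, ceil
from statistics import NormalDist

Z = NormalDist().inv_cdf  # standard-normal quantile

def n_per_cell(p1, p2, alpha=0.05, power=0.80):
    """Sample size per cell for a two-sided two-proportion test."""
    z = Z(1 - alpha / 2) + Z(power)
    return ceil(z**2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p1 - p2) ** 2)

def z_stat(p1, p2, n):
    """z statistic for two equal-sized cells."""
    pooled = (p1 + p2) / 2
    return abs(p1 - p2) / sqrt(2 * pooled * (1 - pooled) / n)

# Sized to detect a difference that matters: 40% vs. 45% preference.
print(n_per_cell(0.40, 0.45))          # about 1,531 respondents per cell

# Oversized sample: a trivial 0.5-point gap becomes "significant".
z = z_stat(0.400, 0.405, n=200_000)
print(f"z = {z:.2f}, significant at 95%? {z > 1.96}")   # True, but meaningless
```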

What Is Your Real Sample Size?

Now, let’s dig a little deeper…when you weight data, are you aware there is something called a RIM efficiency statistic, which says the effective sample size (for stat testing) is less than the nominal sample size?  In other words, if you weight data on demos and your raw sample does not reflect the target population’s demos, the degree of imbalance reduces statistical precision.  I have seen RIM efficiency statistics as low as 25%…meaning the sample was so imbalanced that it was only worth 25% of the nominal sample size for stat-testing purposes. Are you considering this? Skipping interviewing quotas and weight capping (an art form of its own) to save money might be more expensive, all things considered.
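The arithmetic behind the idea is the Kish effective-sample-size formula, which is essentially what a RIM efficiency statistic reports. A small sketch (Python; the weight distribution is invented for illustration):

```python
import random

def weighting_efficiency(weights):
    """Kish formula: effective n = (sum w)^2 / sum(w^2)."""
    n_eff = sum(weights) ** 2 / sum(w * w for w in weights)
    return n_eff, n_eff / len(weights)

# Hypothetical badly balanced sample of 1,000: some respondents carry huge weights.
random.seed(1)
weights = [random.choice([0.3, 0.5, 1.0, 4.0, 6.0]) for _ in range(1000)]
n_eff, eff = weighting_efficiency(weights)
print(f"nominal n = 1000, effective n = {n_eff:.0f}, weighting efficiency = {eff:.0%}")

# Capping extreme weights trades a little bias for a lot of precision.
capped = [min(w, 2.0) for w in weights]
n_eff_c, eff_c = weighting_efficiency(capped)
print(f"after capping at 2.0: effective n = {n_eff_c:.0f}, efficiency = {eff_c:.0%}")
```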

Don’t Forget the Problem of Bias

Bias is a different kind of statistical problem. In the world of media effectiveness measurement, we often create a virtual control cell to match the exposed cell by “twinning” on demos.  That DOES NOT ensure the control cell is really balanced to the exposed cell.  It doesn’t matter how big your sample sizes are…if you do not balance test vs. control on the pre-existing propensity to convert for the brand, your estimate of the lift in conversion will be BIASED UPWARD.  This is not a statistical confidence issue but a bias issue…a silent killer.
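A toy simulation (Python; all rates are invented) shows the mechanism. The true ad lift is 2 points, but because exposure skews toward people already in-market for the brand, and demos do not capture in-market status, a demo-matched virtual control sits at a lower baseline and the naive read comes out near 6 points:

```python
import random
random.seed(7)

TRUE_LIFT = 0.02                                     # actual effect of seeing the ad
BASE = {"in_market": 0.10, "not_in_market": 0.02}    # pre-existing propensity to convert

def simulate(n=200_000):
    exposed, control = [], []
    for _ in range(n):
        in_market = random.random() < 0.20
        seg = "in_market" if in_market else "not_in_market"
        # Targeting means in-market people are far more likely to see the ad;
        # matching on demos alone leaves this imbalance in place.
        p_exposed = 0.60 if in_market else 0.10
        if random.random() < p_exposed:
            exposed.append(random.random() < BASE[seg] + TRUE_LIFT)
        else:
            control.append(random.random() < BASE[seg])
    return sum(exposed) / len(exposed), sum(control) / len(control)

exp_rate, ctrl_rate = simulate()
print(f"exposed {exp_rate:.3f} vs control {ctrl_rate:.3f} "
      f"-> naive 'lift' = {exp_rate - ctrl_rate:+.3f} (true lift is {TRUE_LIFT:+.3f})")
```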

The Research Methodologist

The research methodologist is the guardian of best practices, artfully applied. Sadly, it is becoming a lost profession, but you had better have one. The methodologist sees all of these issues…sample size, stat testing, data weighting, asking the right questions the right way…like a Rubik’s cube they can solve again and again.  When I was at the NPD Group for 25 years, someone named John Caspers served this role so well that we called him Yoda. His studies always produced the best damn survey data I ever saw.

Yes, stat testing is often applied without much thought, or incorrectly (forgetting RIM efficiency, for example).  You could even argue the basics aren’t in place, because most marketing data is based on some form of convenience sampling. But let’s remember that stat testing done right is an essential aid to fact-based and streamlined marketing decision-making.  Without stat testing, we would degenerate into polarized debate based on ungrounded opinions.  Don’t we have enough of that now from politics?
