Data Quality, Privacy, and Ethics

September 22, 2015

First, Psychology Studies – Is #MRX Next?

What are the implications for market research of the results from the Reproducibility Project?

by Zontziry Johnson

On August 27, The New York Times published an article detailing the efforts of a team called the Reproducibility Project to replicate findings from psychology studies published in reputable journals (and by reputable, I’m referring to peer-reviewed journals like Science). In short, the results of a number of those studies could not be recreated, casting something of a pall of doubt over studies done by anyone in any field.

Should we really worry?

The first time I read through the article, I worried for anyone doing any type of research and trying to get it published. I’ve worked at a scientific research institution and am familiar with the varying levels of trustworthiness among scientific journals. There’s a reason studies take so long to be published in the most credible journals: they go through a rigorous peer-review process to ensure the study was conducted according to sound scientific principles, a kind of scientific “sniff test,” if you will.

However, closer scrutiny made me wonder a bit about the way the studies were being reproduced. This quote in particular bothered me: “…there could be differences in the design or context of the reproduced work that account for the different findings.” One example cited was a study that was reproduced using women from the United States instead of women from Italy. The findings of the reproduced study were weaker than those of the original, and a closer look shows that cultural differences can certainly play a role in the results.

What’s the real issue?

I think there are two real issues at play here. The first is a question of how we talk about original studies. Are global inferences being made from studies focused on one particular culture? For example, in the study on how attractive women rated men based on their time of fertility, which used a sample of women primarily from Italy, are generalizations being made without taking cultural biases into account? Recently, another study made headlines for finding that, as the headlines put it, “Having children is one of the crappiest things that can happen to an adult.” An actual reading of the study showed that it concerned German parents’ experiences with parenthood, and more specifically why German parents were more likely to have only one child, even if they had expected to have two when they first thought about how many children they wanted. The idea explored was how supported parents were by their peers and families, and how they expected the parenting experience to go. Those who didn’t have good support in place when they had their first child, and whose experiences didn’t match their expectations, were less likely to have a second child.

So, we need to stop generalizing, misinterpreting, and misrepresenting results when we talk about them in the media, from well-known outlets to our own blogs and social media shares.

Second, when a study is being reproduced, well, it should be reproduced, not approximately reproduced. I understand that doing so takes significant time, effort, and money, much like the original studies did, I’m sure. But to be credible, you can’t set out to recreate an apple, end up with a jicama instead (if you haven’t eaten a jicama, its texture and flavor are close to those of some apples), and then declare that the apple wasn’t an apple after all.

Implications for market research

What does this mean for the field of market research? I’ve been thinking about this since reading the NYT article a couple of weeks ago. Here are some of my conclusions.

  • Be sure we’re using sound methodology for our studies. Be up-front when reporting the results, specifically identifying the sample used (again, cultural biases play a role in results) and whether the results are representative of the population being studied. Remember to publish the sample size and the confidence interval for your results; a quick way to compute a margin of error is sketched just after this list. I think in the current push for faster studies and visual reports, the rigor behind some of the research can be lost, and we can end up with poorly run projects and misleading results.
  • When talking about other studies, be careful of making broad generalizations or misrepresenting the original data.
  • If something you see reported seems a bit outlandish, or very surprising, go check the original source of data.
  • Don’t re-share a headline just because it seems interesting or because it’s gone viral. Read the source material. Too often, items are reshared on social media or commented on without anyone taking the time to read the original source, and conclusions are drawn from others’ comments rather than from the item that was actually shared.
  • Some studies in market research won’t be reproducible simply because we are often measuring changing perceptions among audiences. Thanks to a variety of factors (marketing campaigns, market influences, and so on), those perceptions are likely to have changed by the time the same study is conducted again, even among the exact respondents from the original study. I don’t think even trackers could be reproduced, for this very reason; they are typically tracking changes in an audience, from changes in satisfaction to changes in perception to changes in behavior.
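
On the point about publishing sample sizes and confidence intervals, here is a minimal sketch, in Python, of how a margin of error for a reported proportion can be computed. The respondent count and observed percentage are hypothetical, chosen only for illustration.

import math

def margin_of_error(p_hat, n, z=1.96):
    # z = 1.96 corresponds to roughly 95% confidence for a proportion.
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical example: 600 respondents, 42% agreeing with a statement.
moe = margin_of_error(0.42, 600)
print(f"42% +/- {moe:.1%} at 95% confidence")  # about +/- 3.9 percentage points

Reporting that interval alongside the headline percentage makes it clear how much precision the sample actually supports.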

In short, do good research and take the time to review claims before passing them along. Let’s be good stewards of our own and of others’ data.



