
Separating Neuroscience from NeuroHype


By Kevin Gray

As everyone knows, we only use ten percent of our brains, right-brained people are more creative, and pregnant women lose control of their minds.

Except that what everyone knows is probably wrong, according to Christian Jarrett.  Jarrett is a contributor to Wired, but not merely a correspondent who writes well about technical matters.  And, make no mistake about it, he writes very well and has recently published an enlightening and entertaining book entitled Great Myths of the Brain.  Importantly, Jarrett also writes as an expert: he holds a PhD in Cognitive Neuroscience and is editor of the British Psychological Society’s Research Digest.

Being a neuroscientist himself, Jarrett does not think that neuroscience is bunk.  What he does believe is that much of what has been written or said about it in the popular media is bunk.  There is a difference:

We’ve made great strides in our understanding of the brain, yet huge mysteries remain.  They say a little knowledge can be a dangerous thing and it is in the context of this excitement and ignorance that brain myths have thrived…. Salesmen are capitalizing on the fashion for brain science by placing the neuro prefix in front of any activity you can think of…

Not surprisingly, Jarrett is concerned about backlash:

With all the hype and mythology that swirls around the brain, the risk is that people will become disillusioned with neuroscience for failing to provide the revolution in human understanding that many have heralded…There seems to be a rising mood of skepticism, weariness with the clichéd media coverage of new results, and a growing recognition that neuroscience complements psychology; it can’t possibly replace it.  But let’s remember too that neuroscience is in its infancy.  We are unearthing new findings at an astonishing rate, many of which are already helping people with devastating brain disorders.

A few of the numerous other popular misconceptions and overstatements he covers in the book include:

  • The brain is a computer
  • Adults can’t grow new brain cells
  • The female brain is more balanced
  • Neuroscience is transforming human self-understanding
  • Brain training will make you smart
  • Brain food will make you even smarter

Jarrett briefly touches on neuromarketing and thinks it holds promise, though he feels there has been a lot of nonsense written about it too.  (In the interest of disclosure, this is also my opinion.)  He points out that many neuromarketing claims appear in newspaper articles or magazines rather than in peer-reviewed scientific journals, and he calls for more rigor and balance:

Although it’s early days, and there’s been an inordinate amount of hype, there are ways in which brain scanning techniques could complement the traditional marketing armamentarium.

As an illustration, he describes a hypothetical new food product that is a hit with consumers taking part in a conventional taste test: “Brain scanning and other physiological measures could potentially identify what makes this product distinct…”

Scientific controversies rarely seem to boil down to the simple dichotomy that a theory has been either conclusively proven or conclusively disproven, and even in the hardest of hard sciences there are large grey regions which force us to examine the balance of the evidence in order to come to a sensible judgment.  A sound conclusion, in fact, may be to suspend judgment, and Jarrett is very good at giving balanced appraisals of the evidence regarding the various issues he examines and makes his opinion clear that there often is at least a grain of truth in many myths.

Exaggerations and basic misunderstandings, such as confusing statistically significant with consequential, can unfortunately spread like wildfire through the blogosphere and even respected news media, and what is merely appealing conjecture can quickly become “established” as fact through sheer repetition.  Claims regarding potential are not proven facts, and I’d urge us all to watch the pea under the thimble when hearing or reading about any breathtaking new claim that purports to be grounded in science.  Marketing researchers should in theory be better than most at spotting these stampedes of misinformation, but we too can easily fall victim to a herd mentality; being marketing researchers does not mean we’ve been immunized against embellishments and outright deceptions.

So how do we, as laypersons with respect to neuroscience, separate the wheat from the baloney?  Jarrett lists six guidelines, which he returns to frequently in the book:

  1. Look out for gratuitous neuro references: “Just because someone mentions the brain it doesn’t necessarily make their argument more valid.”

  2. Look for conflicts of interest: “Look for independent opinion from experts who don’t have a vested interest. And check whether brain claims are backed by quality peer-reviewed evidence.”

  3. Watch out for grandiose claims: “Sound too good to be true? If it does, it probably is.”

  4. Beware of seductive metaphors: “We’d all like to have balance and calm in our lives…”

  5. Learn to recognize quality research: “Ignore spin and take first-hand testimonials with a pinch of salt. When it comes to testing the efficacy of brain-based interventions, the gold standard is the randomized, double-blind, placebo-controlled trial…The most robust evidence to look for in relation to brain claims is the meta-analysis…”

  6. Recognize the difference between causation and correlation: “The causal direction could run the other way (people with a larger Y like to do activity X), or some other factor might influence both X and Y.”

Jarrett’s tips are sound advice for evaluating science in general and I believe most also apply to new marketing research techniques.
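Jarrett’s sixth point, on correlation and causation, is easy to demonstrate with a toy simulation.  The data below are entirely invented for illustration: a hidden third factor Z drives both X and Y, so X and Y correlate strongly even though neither has any causal influence on the other.

```python
import random

random.seed(42)

# Hypothetical data: a confounder Z drives both X and Y.
# X and Y never influence each other, yet they correlate strongly.
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]       # the hidden confounder
x = [zi + random.gauss(0, 0.5) for zi in z]      # X depends only on Z
y = [zi + random.gauss(0, 0.5) for zi in z]      # Y depends only on Z

def corr(a, b):
    """Pearson correlation of two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    var_a = sum((ai - ma) ** 2 for ai in a)
    var_b = sum((bi - mb) ** 2 for bi in b)
    return cov / (var_a * var_b) ** 0.5

print(f"corr(X, Y) = {corr(x, y):.2f}")  # roughly 0.8 in expectation
```

A naive reading of that 0.8 would be “X causes Y” (or the reverse); the code makes plain that neither is true, which is exactly the trap Jarrett warns about.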

To elaborate a bit further, if someone claims that something has been “proven” on the basis of a single unreplicated study, in my opinion, they have only proven themselves suspect.  Even a well-conducted, large-scale meta-analysis that has taken study heterogeneity into account probably will not clinch it; that may require multiple, independent meta-analyses.1  Publication bias is another concern.  This is a lengthy matter, but it essentially refers to the fact that many studies go unpublished not because of poor quality but because no statistically significant effects were detected.  A negative finding, however, is just as important as one achieving statistical significance.
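Publication bias lends itself to a quick back-of-the-envelope simulation.  The numbers below are invented for illustration: when only “significant” results see print, the published literature overstates the true effect even though every individual study was honestly run.

```python
import random
import statistics

random.seed(1)

# Hypothetical scenario: 1,000 small studies of a true effect of 0.2
# (in standard-deviation units), 50 subjects each, known sd of 1.
TRUE_EFFECT, N_SUBJECTS, N_STUDIES = 0.2, 50, 1000
se = 1 / N_SUBJECTS ** 0.5                        # standard error of the mean

observed = [
    statistics.mean(random.gauss(TRUE_EFFECT, 1) for _ in range(N_SUBJECTS))
    for _ in range(N_STUDIES)
]

# "Publication": only studies whose z-score clears 1.96 (p < .05) appear.
published = [m for m in observed if m / se > 1.96]

print(f"true effect:            {TRUE_EFFECT}")
print(f"mean of all studies:    {statistics.mean(observed):.2f}")
print(f"mean of published only: {statistics.mean(published):.2f}")
```

The mean across all studies sits near the true 0.2, while the mean of the “published” subset is substantially inflated; a reader who only ever sees the significant studies gets a systematically distorted picture.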

Many science writers appear to me to have had little formal education in research methods and statistics, and this is noticeable when an article headline does not match the body of the article or when the writer interprets the original paper or papers being cited inaccurately or selectively.  Many news accounts on any subject are oversimplified, in my opinion, but neuroscience may fall victim to this more often than most because it is inherently so interesting and mystifying and, at the same time, so scary and complex.  “The brain is always busy, whether it’s engaged in an experimenter task or not, so there are endless fluctuations throughout the entire organ.  Increased activity is also ambiguous – it can be a sign of increased inhibition, not just excitation.  And people’s brains differ in their behavior from one day to the next, from one minute to the next.  A cognition-brain correlation in one situation doesn’t guarantee it will exist in another.  Additionally, each brain is unique – my brain doesn’t behave in exactly the same way as yours.”  One wouldn’t have guessed any of this from a typical mass media article!

It’s also wise to be wary of “new evidence” when we hear rebuttals such as “Well, that may be true, but new evidence shows…”, which would seem to suggest that the old evidence cited until now wasn’t actually credible.  Should we therefore be swayed by the new evidence?  Cherry-picked results are a pet peeve of statisticians and a favorite tool of charlatans of all sorts, as are fancy visualizations, which can be cunningly used to mask thin substance.  To be clear, I am not pointing the finger at neuroscience as a special case, since these sorts of tactics can be employed whenever science is invoked as a reason to trust a claim.  (I offer a few more thoughts on what we should be on guard against in

I am not a neuroscientist myself, of course, and I think the first step for anyone who wants to learn more about any scientific or technical matter outside one’s own areas of expertise is to find out who the real authorities are and what they are really saying or writing about the subject.  Judging from assorted reading over several years, Jarrett’s views seem to me to be quite representative of mainstream neuroscience (which for the most part has nothing at all to do with marketing).

For readers wishing to dig more deeply into neuroscience, there are many sources.  Bob Garrett, a Visiting Scholar at California Polytechnic State University, has co-authored Brain & Behavior: An Introduction to Biological Psychology, a popular textbook now in its fourth edition.  Many marketing scientists will be familiar with the Journal of the American Statistical Association (JASA), a quarterly publication that has featured many technical papers on neuroscience subjects over the years.  In particular, JASA is an excellent source on measurement challenges facing the field that more often than not go unmentioned in popular media accounts.  The British Psychological Society’s Research Digest can be found here, and Jarrett’s Wired blog here.  In addition, there are several skeptical blogs Jarrett cites that you may wish to have a look at.2



1 “Meta-analysis” does not simply mean that a reviewer has examined more than one study.  It is a set of procedures for statistically synthesizing the results of several primary studies.  Like any methodology, meta-analysis can be misused or abused.  There are many online sources about it and I can also recommend Methods of Meta-Analysis (Schmidt and Hunter) and Introduction to Meta-Analysis (Borenstein et al.), two frequently cited textbooks on this topic.
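For the curious, the arithmetic at the heart of a simple fixed-effect meta-analysis is just an inverse-variance weighted average of the study estimates.  The effect sizes and standard errors below are made up for illustration; real syntheses (as the textbooks above describe) involve much more, including heterogeneity checks.

```python
# Hypothetical effect estimates and standard errors from five studies.
effects = [0.30, 0.10, 0.25, 0.05, 0.18]
ses     = [0.10, 0.15, 0.08, 0.20, 0.12]

# Fixed-effect model: weight each study by the inverse of its variance,
# so precise studies count for more than noisy ones.
weights   = [1 / se ** 2 for se in ses]
pooled    = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect = {pooled:.3f} (SE {pooled_se:.3f})")
# → pooled effect = 0.221 (SE 0.050)
```

Note how the pooled standard error is smaller than any single study’s: combining studies buys precision, which is precisely why a good meta-analysis outweighs any one primary study.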

2 These five blogs are mentioned specifically:


5 responses to “Separating Neuroscience from NeuroHype”

  1. Kevin – nice overview of an area that will plague us for more years to come, I fear. As applications of bad neuro-science continue to fail, our industry will take the hit for doing research that doesn’t predict well. Worse, it is a self-inflicted wound that could have been avoided. Not all suppliers are bad at this or misleading – I’ve met some good ones who don’t overhype and have pretty good research backup to support what they’re doing. Most are clueless.

  2. A good article. The analogy I use (I was sort of a neuropsychologist once) is that you won’t be able to work out the plot of a soap opera by monitoring the electrical signals coming from the TV’s circuit board. That is where we are at the moment. I love neuroscience, but I think you really need to understand what sort of data you are getting. Mostly it is noise….
