
How Myths Are Formed! The Law Of Small Numbers & Market Research

Daniel Kahneman's book 'Thinking, Fast and Slow' points out that researchers have their own bias – the law of small numbers! Is this bias to blame for many modern-day myths?


By Neal Cole

Much of the attention given to Daniel Kahneman’s book Thinking, Fast and Slow has been about how people make decisions and the implications for models of consumer behavior. However, the book also points out that researchers have their own bias – the law of small numbers! Is this bias to blame for many modern-day myths?

What is it?

It’s a general bias that makes people favor certainty over doubt. Most people, including many experts, don’t appreciate how research based upon small numbers or small populations can often generate extreme observations. As a result, people have a tendency to believe that a relatively small number of observations will closely reflect the general population. This is reinforced by a common misconception that random numbers don’t generate patterns or form clusters. In reality they often do. Kahneman makes this observation:

“We are far too willing to reject the belief that much of what we see in life is random.”
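Kahneman’s point is easy to demonstrate. The minimal simulation below (my own illustration, not from the book) flips a fair coin 200 times and reports the longest run of identical outcomes; purely random sequences routinely contain streaks long enough to look like a meaningful pattern:

```python
import random

# Simulate 200 fair coin flips and find the longest run of identical
# outcomes. Random sequences routinely contain long streaks that look
# like meaningful patterns.
random.seed(1)  # fixed seed so the sketch is reproducible
flips = [random.choice("HT") for _ in range(200)]

longest = current = 1
for prev, nxt in zip(flips, flips[1:]):
    current = current + 1 if nxt == prev else 1
    longest = max(longest, current)

print("Longest streak in 200 random flips:", longest)
```

In a sequence of 200 fair flips, a streak of six or more heads or tails in a row is the norm rather than the exception, yet most people would read such a run as evidence of something non-random.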

Why are researchers prone to the law?

Kahneman acknowledges that researchers (social and behavioral scientists in his case) have too much faith in what they learn from a few observations:

  • They select too small a sample size, which leaves their results subject to a potentially large sampling error.
  • Experts don’t pay enough attention to calculating the required sample size and instead rely on rules of thumb.
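The required sample size can be calculated rather than guessed. A minimal sketch for estimating a proportion, using the standard formula n = z²p(1−p)/e² with an assumed 95% confidence level and the worst-case p = 0.5:

```python
import math

def required_sample_size(margin_of_error, confidence_z=1.96, p=0.5):
    """Sample size needed to estimate a proportion within +/- margin_of_error.

    Uses the standard formula n = z^2 * p * (1 - p) / e^2, with p = 0.5
    as the worst case when the true proportion is unknown, and z = 1.96
    for a 95% confidence level.
    """
    n = (confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n)

print(required_sample_size(0.05))  # +/-5 points at 95% confidence -> 385
print(required_sample_size(0.03))  # +/-3 points at 95% confidence -> 1068
```

Note how tightening the margin of error from ±5 to ±3 points nearly triples the required sample – a calculation that a rule of thumb will rarely get right.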

A well-known example of this is the supposed ‘Mozart effect’. A study suggested that playing classical music to babies and young children might make them smarter. The findings spawned a whole cottage industry of books, CDs and videos.

The study by psychologist Frances Rauscher was based upon observations of just 36 college students. In just one test students who had listened to Mozart “seemed” to show a significant improvement in their performance in an IQ test. This was picked up by the media and various organizations involved in promoting music. However, in 2007 a review of relevant studies by the Ministry of Education and Research in Germany concluded that the phenomenon was “nonexistent”.

What is to blame for the bias?

Kahneman puts much of the blame for people being subject to the bias of small numbers on System 1. This is because System 1:

  • Eliminates doubt by suppressing ambiguity and automatically constructs coherent stories that help us explain our observations.
  • Embellishes scraps of information to produce a much richer image than the facts often justify.
  • Is prone to jumping to conclusions and will construct a vision of reality that is too coherent and believable.
  • Seeks patterns and looks for meaning in observations, because humans are natural pattern seekers.
  • Does not expect to observe regular patterns from a random process, so when it sees a potential correlation it is far too quick to reject the assumption that the process is entirely random.

Overall Kahneman believes people are prone to exaggerating the consistency and meaning of what they see. A tendency for causal thinking also leads people to sometimes see a relationship when there isn’t one.

Questions for Researchers?

Kahneman’s work raises some important questions for researchers and customer insight specialists.

  • We are pattern seekers, and we often use small samples in qualitative research and usability testing. However, is there a tendency to extrapolate the findings from small scale studies to the wider population?
  • Do researchers sometimes select too small a sample size in quantitative studies and experiments? Is this because they use a rule of thumb rather than calculating the statistically required sample size?
  • Are we too quick to reject a random process as being truly random?

As with all forms of bias, reality is characterized by a spectrum of behaviors, from the rigorous to the lax. From my experience on the client side of research, there are a number of reasons why research sometimes falls foul of the bias.

Observations from a client-side researcher!

  • Usability tests evaluate actual behavior, so normal sampling rules don’t apply!

I read this recently in a blog about website usability testing. This is a myth. The reason for only undertaking a small number of tests is that there are diminishing returns: after 5 to 10 tests, few new usability risks tend to be uncovered. The law of small numbers still applies even when it involves human behavior.
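The diminishing returns are easy to quantify. If a usability problem affects a given proportion of users, the chance of seeing it at least once in n sessions is 1 − (1 − p)^n. The sketch below uses p = 0.31, a per-session detection rate often cited from Jakob Nielsen’s research – an illustrative assumption, not a universal constant:

```python
# Probability that a usability problem affecting a given share of users
# is observed at least once in n test sessions: 1 - (1 - p)^n.
# p = 0.31 is a detection rate often cited from Nielsen's research;
# treat it as an illustrative assumption, not a universal constant.
def prob_problem_found(n_sessions, p=0.31):
    return 1 - (1 - p) ** n_sessions

for n in (1, 5, 10, 20):
    print(f"{n:2d} sessions: {prob_problem_found(n):.0%} chance of seeing the problem")
```

With these assumptions, around 84% of such problems surface within 5 sessions and around 98% within 10 – which explains the small samples, but says nothing about how common each problem is in the wider population.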

Like any form of qualitative research, usability testing is a valuable way of uncovering potential risks and perceptions of a new design. However, just like traditional qualitative research, usability testing still benefits from being validated by quantitative techniques (e.g. A/B or multivariate testing).

  • Treating qualitative findings like quantitative data!
I wasn’t going to include this as it seemed too obvious. I changed my mind when I read a post on a LinkedIn group which asked: can qualitative become quantitative?
The answer is normally no, as qualitative studies usually rely on small numbers and a less structured approach to questionnaire design. This means that each interview is unlikely to be identical to the others, and so the interviews are not directly comparable.
However, the post reminded me of a number of occasions when I witnessed people latching onto the number of respondents choosing an option in a qualitative study as being indicative of the frequency of behavior in the wider population. This is the risk of quoting numbers or proportions in qualitative research: non-researchers sometimes interpret them as indicative of actual customer behavior.
  • Senior management’s comprehension of sampling & statistics:

When I worked for a life insurance company I was constantly being challenged about the reliability of findings from small samples. The reason for this was simple. Almost all the senior management were actuaries. This meant they had an excellent grasp of the potential bias caused by sampling. This had the benefit that other departments were unlikely to be able to misuse research based upon small samples because they would meet the same challenges as I did.

  • DIY research tools (e.g. SurveyMonkey):

DIY tools have given non-researchers easy access to the means of conducting and analyzing their own surveys. I am not against the use of these tools. Unfortunately, though, many non-researchers who use them may not have sufficient knowledge of sampling and statistics to design surveys correctly or to analyze the resulting data. If that is the case, non-researchers may be particularly prone to bias resulting from the law of small numbers.

  • Correlation does not mean causation!

Key driver/multiple regression analysis is often used for modeling the influence of independent variables on a single dependent variable. However, such models can only suggest a causal relationship; further experimentation and analysis are needed to support it.

The nature of survey data (e.g. independent variables are often correlated) and typical sample sizes do not always justify the use of such statistical techniques. Big data can play a key role here in providing more robust evidence for causal relationships. But without evidence to suggest a reason for a causal relationship, it is important that a correlation between two variables is treated with the utmost caution.
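A quick way to see why such caution is needed: with small samples, two completely unrelated variables will often show a sizeable correlation by chance alone. This minimal simulation (my own illustration, not from the text) draws 1,000 pairs of independent random samples of size 10 and counts how many show |r| > 0.5:

```python
import math
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(42)  # fixed seed for reproducibility
n, trials = 10, 1000
strong = sum(
    abs(pearson_r([random.random() for _ in range(n)],
                  [random.random() for _ in range(n)])) > 0.5
    for _ in range(trials)
)
print(f"{strong} of {trials} unrelated pairs (n={n}) show |r| > 0.5")
```

Well over a tenth of these entirely independent pairs show what looks like a moderate correlation – patterns generated by nothing but randomness and a small sample.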

  • Death by PowerPoint!

This is a training issue, but I frequently see PowerPoint slides that highlight differences between sub-samples that are not statistically significant. In most research agencies the modeling and analytics are carried out by a separate department from the account executives. This is not a problem provided the account executives who present data have sufficient understanding of the nature and limitations of the analysis they present. From my experience this is not always the case.
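Checking whether a sub-sample difference is significant before it goes on a slide takes only a few lines. A sketch of a standard two-proportion z-test (the figures below are hypothetical):

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for the difference between two sample proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via the error function.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 52% vs 45% looks like a headline difference, but with only 100
# respondents per sub-sample it is nowhere near significant at the 5% level.
z, p = two_proportion_z(52, 100, 45, 100)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Here the seven-point gap yields a p-value of roughly 0.32 – a difference that should never be presented as a finding.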

  • Budgets and treating research like a commodity!

When companies treat research like a commodity and constantly expect to make cost savings, there is a danger that sample sizes will be cut to the bone. As a result, studies don’t deliver the required level of reliability. I briefly worked on a multi-country brand and advertising tracking study that only had a sufficient sample to analyze on a three-monthly basis. This proved very frustrating, as it wasn’t sensitive enough to measure the short-term (i.e. monthly) impact of bursts of advertising activity.

  • Pressure to identify insights!

Researchers are by their nature pattern seekers, and this can make them susceptible to seeing phenomena that are generated by a purely random process. There is nothing wrong with this, provided we treat such patterns with caution and seek further data or more robust research to test our hypotheses. This is why researchers need to be trained to present results in a balanced and critical way, so that management don’t jump to conclusions.

  • Reporting continuous data too frequently! 

There is a growing tendency to expect to have data on tap. This is a characteristic of the digital age. But sometimes it leads to pressure to analyze and communicate continuous survey data too frequently. I came across a continuous customer satisfaction survey a few years ago where high-level Key Performance Indicators were communicated to each business area on a monthly basis. However, despite most of the base sizes being far too small to identify any significant differences, the Customer Insight Manager was expected to comment on changes from the previous month’s score. This encouraged people to invent reasons for changes that were not even statistically significant.
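The problem is easy to simulate. In the sketch below the true proportion of satisfied customers never moves from 70%, yet with a monthly base of only 50 respondents the reported score still swings by many points – movement that is pure sampling noise (the figures are illustrative assumptions):

```python
import random

# Simulate 12 monthly satisfaction scores where the true share of
# satisfied customers never changes (70%) and each month's base is
# only 50 respondents. Any month-to-month "movement" is pure noise.
random.seed(3)  # fixed seed so the sketch is reproducible
TRUE_RATE, BASE = 0.70, 50

monthly = [
    sum(random.random() < TRUE_RATE for _ in range(BASE)) / BASE
    for _ in range(12)
]
print(["%.0f%%" % (m * 100) for m in monthly])
print("Spread: %.0f points" % ((max(monthly) - min(monthly)) * 100))
```

A manager asked to explain each month’s "change" in this series would be narrating nothing but sampling error.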

Implications

The law of small numbers gives researchers an interesting insight into our own potential fallibility. It warns us against listening to our intuition and relying on rules of thumb for determining sample size. Kahneman also provides a useful reminder to be careful about how findings are communicated when dealing with data from small numbers. Research and experimentation are, after all, iterative processes, so we should always be looking to validate results, whether from large or small scale studies. It is only through trial and error that we are ultimately able to separate insights from myths.

Thank you for reading my post and I hope it provided some useful insights.


14 responses to “How Myths Are Formed! The Law Of Small Numbers & Market Research”

  1. All of your points are unassailable as long as the world you live in is the world of prediction. I would agree with the general observation that we want the world to be more knowable than it really is. However, I find it somewhat ironic that your treatise on how research should be is based on the observation of one individual in a book!
    As a statistician in a research environment, my challenge is living between being right and being helpful. You may say, “You can’t be helpful if you’re not right!” If I consider that “all measurement is a compromise with reality” (Jerry Zaltman) being right is as comparable an illusion to believing in insights from few observations. When I compound the speed of the business and the limits of the budget, the reality is that “valid” sample sizes are out of reach for most questions we deal with. In contrast, I would observe that in the quest for insights, don’t dismiss the power of n=1!

  2. Small sample size is particularly problematic in niche markets. At our b2b market research company, we would be nothing without our PhD analysts to identify which small numbers are meaningful and which aren’t!

  3. A great article. We agree that people need to be very conscious of sample sizes and confidence intervals when designing their research. So much so that we developed a handy, free, significance testing app, which you can find on the Bonamy Finch website.

    Hope this doesn’t upset any advertising rules – it’s supposed to be genuinely helpful!
