Biasing Your Research by the Act of Doing Research


Editor’s Note: GreenBook’s own resident curmudgeon Ron Sellers offers up a great “meat and potatoes” post today on the inherent dangers of bias in longitudinal research, which raises questions about panels and communities as well.

Although Ron doesn’t get into it here (hopefully he or one of our other authors will soon), it leads to the idea that perhaps a truly unbiased sample is impossible in this era of over-surveyed populations, regardless of the recruitment method. That factor can and should be accounted for in our discipline, but an even bigger question is: in an always-on digital society where “reality TV” dominates broadcast media, YouTube creates viral sensations, and individuals strive to brand themselves via myriad social media channels, does the whole principle of the observer effect have to be rethought? In effect, our society expects to be observed, and that very expectation changes behavior, thus introducing bias. Perhaps Behavioral Economics and virtual ethnography hold the key here, since it certainly seems as if we need to rethink our assumptions about the possibility of achieving a sample free of bias from the observation effect.

Until we have that debate, Ron brings up some great points about what to watch for in more traditional approaches. As always, reading Ron’s musings is well worth the time.


By Ron Sellers

One of the fundamental tenets of research is not to affect the research subjects (and therefore the results) by the simple act of doing the research.  For instance, anthropologists often worry that by observing their subjects, they are impacting the behaviors of those subjects.

This is often given as a criticism of focus groups:  people may react in unnatural ways when they’re in a room surrounded by microphones, a big mirror, and a professional moderator who’s asking them about their last purchase of bathroom tissue.

Yet a greater – and overlooked – danger in this applies to longitudinal studies (where the exact same respondents are tracked over time).

Years ago, I participated in a mail panel (remember those?).  Every month, a new set of questions would arrive in my mailbox.  One day, I received a set of questions about automobile advertising – which brands I had ad recall for, what the message of the ad was, etc.  I completed it without any problems.

The next month, I got the same questions again.  And then again the next month, and the next.  At some point, I knew what was coming each month – amongst the questions about pet ownership, allergy medication, and other forgettable issues would be the same set of questions about automotive advertising.

Before long, my awareness of automotive advertising was heightened considerably.  I would think, “Oh, there’s a new Pontiac ad – now I can say I saw something for Pontiac this month.”  In other words, my advertising recall rose substantially simply because of the research in which I was participating.  The researchers were no longer getting real-world responses from me because I had been impacted by the act of completing the research.

This has serious implications for any longitudinal research.  Let’s say I’m completing a survey about tea.  I’m asked if I’m aware of Lipton, Bigelow, Stash, Tazo, and other brands.  I’ve never heard of Stash, and because it piques my curiosity, I look it up.  Maybe even buy a box.  Maybe even start drinking it regularly (it is pretty good tea).

Six months later, I complete another questionnaire about tea.  I can now tell the researchers that not only am I aware of the Stash brand, but I am a regular user of the brand.  Because I like Stash, I also have a heightened awareness of the brand’s advertising, so I recall a number of their ads.

Would this impact the research findings?  It certainly would if Stash commissioned the research to find out whether their new advertising had the ability to reach people who were unaware of the brand and convert them to product buyers.

While something like this may not happen with very many respondents, the earlier example – tracking my advertising awareness for the same product category month after month – could very easily affect many of the research participants.

And while it doesn’t involve longitudinal studies, there’s another way this issue can arise when using an online access panel.  There are companies that use the same methodology and questions with multiple clients.  This is common in advertising research, for example – particularly since each client’s ads can be compared to a set of norms maintained by the research company.

But if that company does a lot of this testing, and returns to the same panel respondents over and over, you could be a victim of this priming effect without even conducting a longitudinal study.  This happened a few times while Grey Matter Research was evaluating panels for our More Dirty Little Secrets of Online Panel Research report.  Our panelists were asked to review advertisements in separate surveys.  But the same research company was using the same measurements to test different ads – the only problem was that the same panelists kept getting opportunities to complete these studies.  So after reviewing ads for candy, and then for investments a few days later, our panelists knew exactly what would be asked (and therefore what to look for in the ads) when they were asked to evaluate automotive advertising later that week.  By returning to the same people over and over for this testing, the research company was influencing their behavior, which influenced the research.

Like most other tools, longitudinal research has its place in the research tool box.  But it should be used only with the understanding that the act of conducting the research very well may influence the research itself.  If there is no way to avoid or control for that possibility, it may be that another methodology is a better bet for the project.

3 responses to “Biasing Your Research by the Act of Doing Research”

  1. Ron – good points to watch out for, but this also suggests pretty bad research design. Way back in time, in my Quaker days, we did a longitudinal study where we split our panel into matched groups (don’t remember how many) on a variety of criteria and only surveyed one group at a time. So any one person in the study may have been surveyed every 3 or 4 months instead of monthly, but we were able to look at the data monthly. There’s a good paper by Chandon, Morwitz, and Reinartz (Journal of Marketing, 2005) on this topic – asking about purchase intent actually increases purchase intent.
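A minimal sketch of the rotation scheme this comment describes, in Python. The group count and panelist IDs are hypothetical, and round-robin assignment stands in for the matching on key criteria the commenter mentions:

```python
def assign_rotation_groups(panelist_ids, n_groups):
    """Deal panelists into n_groups rotating groups, round-robin.

    In a real study the groups would be matched on key criteria
    (demographics, category usage, etc.); round-robin over a shuffled
    roster is only a stand-in for that matching step.
    """
    groups = [[] for _ in range(n_groups)]
    for i, pid in enumerate(panelist_ids):
        groups[i % n_groups].append(pid)
    return groups

def is_surveyed(panelist_group, month_index, n_groups):
    """True if this panelist's group is in the field this month."""
    return month_index % n_groups == panelist_group

# With 4 groups, some group is in the field every month,
# but any one panelist is contacted only every 4 months.
groups = assign_rotation_groups(range(12), 4)
```

The point of the design is visible in the schedule: data arrive monthly, yet each individual's exposure to the questionnaire (and hence the priming risk) is cut to a quarter.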

  2. Three points:

    1) One key to minimising the effect is the time lag between repeated surveys. Using split panels, as the previous comment suggested, can (if the panels are large enough) allow a significant gap, which will reduce recall of the questions.
    2) It is always risky to repeat unprompted questions that are followed by prompted questions – and of course even unprompted questions can affect behaviour (e.g., make a respondent more aware of, or willing to take notice of, advertising in that general product category).
    3) There is a classic design that has big advantages. This involves having the following groups:
    Group 1: Wave 1 and Wave 2
    Group 2: Wave 2 only

    Sampling should be random or match the key characteristics of the two groups.

    Data from Group 1 allow measurement of change WITHIN THAT GROUP. This is subject to the effect of history (including having done the survey before) but allows “churn” to be quantified (change from unaware to aware, change from aware to unaware) compared to stability (staying unaware, staying aware). Much more informative than two independent waves that only measure awareness at each time point. This would of course be much more powerful when applied to measures such as purchase or purchase intention.

    Data from Group 2 can be used to assess whether the outcome in Group 1 at Wave 2 differs from a group NOT exposed to the survey before.

    Of course you won’t get everyone from Group 1 Wave 1 to complete at Wave 2, so the characteristics and responses of those who drop out also need to be assessed.

    When response is (as it always should be) voluntary, then “natural attrition” can provide a good proxy for such a design, and can allow rolling, multiple waves to be completed. If you get roughly half of Wave 1 Group 1 to complete in Wave 2, and could achieve the same from Group 2 in a third wave and so on, you can develop a very powerful longitudinal study that controls for the effect of previously having been surveyed.

    Yankelovich Partners in the USA strongly advocated such rolling designs back in the 1990s. I have tried to “sell” them in Australia, but everyone is so frightened of the priming effect that I have never managed to convince a client it is worth doing. However, as long as the priming effect is small or does not interact with the effect of, say, exposure to (other) advertising, it should be a very cost-effective design.
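A minimal sketch of the two quantities this two-group design yields – churn within Group 1, and a comparison of Group 1’s Wave 2 result against the fresh Group 2. The data shapes and awareness values are hypothetical:

```python
def churn_table(wave1, wave2):
    """Cross-tabulate awareness transitions for panelists who completed
    both waves. wave1, wave2: dicts mapping panelist id -> aware (bool).
    Panelists missing from either wave (dropouts, joiners) are excluded,
    which is why attrition needs to be assessed separately."""
    table = {"stayed_unaware": 0, "became_aware": 0,
             "became_unaware": 0, "stayed_aware": 0}
    for pid in wave1.keys() & wave2.keys():
        before, after = wave1[pid], wave2[pid]
        if before and after:
            table["stayed_aware"] += 1
        elif before:
            table["became_unaware"] += 1
        elif after:
            table["became_aware"] += 1
        else:
            table["stayed_unaware"] += 1
    return table

def priming_gap(group1_wave2, group2_wave2):
    """Wave 2 awareness rate in the previously surveyed group (Group 1)
    minus the rate in the fresh group (Group 2) – a rough estimate of
    the bias introduced by having done the survey before."""
    rate = lambda g: sum(g.values()) / len(g)
    return rate(group1_wave2) - rate(group2_wave2)
```

Two independent waves would only give the two awareness rates; the churn table additionally separates genuine stability from offsetting flows in both directions, which is the extra information the commenter points to.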

  3. Don’s rolling matched samples are a very effective way of measuring repeat bias. But stepping past longitudinal research: for consumer studies we can dip in and out of small groups without disturbing the larger pool, whereas for B2B or small populations, every survey is like a piece of marketing communication to the whole base.

    More to the point, in B2B research you often find respondents expecting the client company to respond directly to issues raised in the survey (“a waste of my time if it doesn’t have an effect”). Research in B2B markets is therefore more a part of the ongoing dialogue between supplier and customer than a pure measurement process. This includes feeding back results and outcomes, even though this might bias the next round of research.