Civicom: Your project success is our number one priority

Biased Research, No Matter What the Methodology

In a world where any methodology choice can introduce some bias to your data, it's imperative to understand how your findings may skew because you chose phone over online (or online over phone, or river sample over panel sample, or any other choice you made).


Editor’s Note: This was mistakenly published under my byline when it should have been attributed to Ron Sellers. Sincere apologies to Ron for the oversight and to our readers for any confusion!

By Ron Sellers

There are endless discussions about methodologies and sampling today.  Are phone interviews representative anymore?  Have online interviews ever been representative?  Is social media sample superior to online panel sample?  Is river sample viable?  Should we all go back to doing mail surveys as the only way to reach a representative sample?

While unfortunately I don’t have all the brilliant answers to those questions, it’s time to insert a word about how methodology choices – whatever those choices are – can affect your research.

I bring this up because of a study I just read.  It’s all about how donors age 60 and older are frequently giving online to non-profit organizations.

Just one little issue:  the study was only conducted online.  Anyone else see a problem here?

Hmmm, I have some other good ideas:

  • I can interview people at truck stops to see what percent of the American population drives a Kenworth.
  • Maybe I should sample residents of Eugene, Oregon, to estimate the proportion of Oregon Ducks fans in the national population.
  • Or possibly I can go to a major RV show where older people have come from all over to evaluate $100,000 vehicles, and do “a nationally representative survey” that shows how the senior population is far more upscale and active than previously thought.

Sadly, while the first two examples were jokes, the third is actually a “survey” I saw many years ago in the news.  But while all three examples are a bit over the top, the same kinds of bias issues are involved with choosing a methodology for any study.

I’m not going to argue here whether online sampling is superior or inferior to phone sampling, whether online panel research can be considered representative, etc.  The truth is, if you’re in the research business, you’re probably going to use both of these methodologies at some point.  The trick today is not to avoid every type of skew, but to understand what type of skew your selected methodology may be bringing to the data.

The most recent figures I’ve seen from the Pew Research Center show 53% of Americans do not use social media, and other research has suggested that the people who use it differ from those who don’t (as well as the fact that heavy users differ from lighter users).  Social media sample might work well for a study about how people use social media, but might not work so well if you’re studying how Americans form and maintain relationships, for example.

I’ve seen estimates that over 20% of Americans do not have a landline, and those people differ significantly from the 80% who do (much younger, greater ethnic diversity, etc.).  A landline phone study might work well if you’re trying to interview people over 50 about their investment habits, but maybe not so well if you’re trying to determine what Americans think about cell phone service.

Online panels entirely exclude the 20-something percent of Americans who do not use the Internet at all, as well as everyone who uses the Internet but is not comfortable enough with it to want to take surveys online (which describes many older adults, for instance).  Panel sample might be a great way to do rock music testing or advertising evaluation among young mothers, but might not be such a good choice if you need to estimate technology adoption rates.

There are times you may have no choice in methodology (e.g. if the client has no e-mail addresses for customers, their customer survey is probably going to be done by phone).  In those situations, it’s still critical to understand what questions cannot legitimately be asked, or where the data may have to be adjusted, due to the methodology.

In the case of the non-profit survey cited above, a major finding was that 51% of donors 60 and older give online, which is surprisingly close to the 75% found among donors under 40.  Unfortunately, the researchers did not adjust for the fact that the proportion of people 60 and older who don’t use the Internet is far higher than it is among younger adults.

Let’s say the proportion of Internet users among older donors is 65%, compared to 90% among donors under 40 (fairly realistic guesses).  Multiplying each reported rate by the share of that group actually online, the adjusted (and correct) figures are that 68% of all donors under 40 give online, compared to 33% of all older donors.  Now, instead of younger donors being only 47% more likely to give online, the reality is that they’re 106% more likely to give online than are older donors.  That’s a pretty huge difference caused by a failure to recognize the bias introduced by a methodology choice.
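The adjustment above can be sketched in a few lines. To be clear, the 90% and 65% Internet-penetration figures are the illustrative guesses from the paragraph above, not measured values:

```python
# Scale each group's reported online-giving rate (measured only among
# Internet users) down to the full donor population, on the assumption
# that donors who don't use the Internet at all cannot give online.

def adjusted_rate(reported_pct, internet_pct):
    """Online-giving rate among ALL donors in a group, as a whole percent."""
    return round(reported_pct * internet_pct / 100)

younger = adjusted_rate(75, 90)  # 75% reported x 90% online -> 68
older = adjusted_rate(51, 65)    # 51% reported x 65% online -> 33

# Relative likelihood, computed from the adjusted whole-percent figures
relative = round((younger / older - 1) * 100)

print(f"Adjusted: under 40 = {younger}%, 60+ = {older}%")
print(f"Younger donors are {relative}% more likely to give online")
```

Running this reproduces the article's adjusted figures: 68% versus 33%, a 106% gap rather than the apparent 47%.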

Unfortunately, we now do research in a world where arguably there is no such thing as a fully representative sample.  Everything has some skew.  Understanding what that skew is likely to be, and what you can and cannot hope to measure accurately as a result, is part of being a good researcher in today’s world.


3 responses to “Biased Research, No Matter What the Methodology”

  1. In addition to the sample frame bias Leonard explores, we should also consider response bias. My research career started in 1985 as a phone interviewer. We did random phone samples and we had to have at least 3X the phone numbers that we needed for target completes. Ten years later, as a project manager, I had to have 10X the phone numbers. Caller ID and voicemail service from the phone companies encouraged the general habit of most consumers to screen their calls. Now, as a client of MR vendors almost 30 years later, I see invoices for sample that is 20X the number of target completes. Calling 20 phone numbers to get one completed interview is not going to produce a representative result, unless the projectible universe of your research is households with a land line, without caller ID or an answering machine, with someone in the house who would rather take a survey than play Angry Birds on their iPad (or more likely they don’t have an iPad).
    In one of my research jobs, I did in-person in-home interviewing. This was very expensive for the client; however, the results were actually projectible to the US population because the households were scientifically selected and every household that was occupied participated in the research. There were 360 households in the original sample and 315 completed interviews. The only segments of the US population not represented in the research were the homeless, people who live in short-term rentals (Residence Inn, etc.), and people who would not answer their door or telephone over a 90-day period. That study was the only research I have seen in more than 25 years in the business where the cited confidence interval was actually valid.

    1. Great points, Philip; thanks! Just to be clear, though, Ron Sellers wrote this, not me; this is his exploration of methodological bias issues, I am just the conduit via the blog. 🙂
