Editor’s Note: This was mistakenly published under my byline when it should have been attributed to Ron Sellers. Sincere apologies to Ron for the oversight and to our readers for any confusion!
By Ron Sellers
There are endless discussions about methodologies and sampling today. Are phone interviews representative anymore? Have online interviews ever been representative? Is social media sample superior to online panel sample? Is river sample viable? Should we all go back to doing mail surveys as the only way to reach a representative sample?
While I unfortunately don’t have brilliant answers to all of those questions, it’s time to say a word about how methodology choices – whatever those choices are – can affect your research.
I bring this up because of a study I just read. It’s all about how donors age 60 and older are frequently giving online to non-profit organizations.
Just one little issue: the study was only conducted online. Anyone else see a problem here?
Hmmm, I have some other good ideas:
- I can interview people at truck stops to see what percent of the American population drives a Kenworth.
- Maybe I should sample residents of Eugene, Oregon, to estimate the proportion of Oregon Ducks fans in the national population.
- Or possibly I can go to a major RV show where older people have come from all over to evaluate $100,000 vehicles, and do “a nationally representative survey” that shows how the senior population is far more upscale and active than previously thought.
Sadly, while the first two examples were jokes, the third is actually a “survey” I saw many years ago in the news. But while all three examples are a bit over the top, the same kinds of bias issues are involved with choosing a methodology for any study.
I’m not going to argue here whether online sampling is superior or inferior to phone sampling, whether online panel research can be considered representative, etc. The truth is, if you’re in the research business, you’re probably going to use both of these methodologies at some point. These days the trick is not to avoid any type of skew, but to understand what type of skew your selected methodology may be bringing to the data.
The most recent figures I’ve seen from the Pew Research Center show 53% of Americans do not use social media, and other research has suggested that the people who use it differ from those who don’t (as well as the fact that heavy users differ from lighter users). Social media sample might work well for a study about how people use social media, but might not work so well if you’re studying how Americans form and maintain relationships, for example.
I’ve seen estimates that over 20% of Americans do not have a landline, and those people differ significantly from the 80% who do (much younger, greater ethnic diversity, etc.). A landline phone study might work well if you’re trying to interview people over 50 about their investment habits, but maybe not so well if you’re trying to determine what Americans think about cell phone service.
Online panels entirely exclude the 20-something percent of Americans who do not use the Internet at all, as well as everyone who uses it but isn’t comfortable enough with it to take surveys online (which describes many older adults, for instance). Panel sample might be a great way to do rock music testing or advertising evaluation among young mothers, but might not be such a good choice if you need to estimate technology adoption rates.
There are times you may have no choice in methodology (e.g. if the client has no e-mail addresses for customers, their customer survey is probably going to be done by phone). In those situations, it’s still critical to understand what questions cannot legitimately be asked, or where the data may have to be adjusted, due to the methodology.
In the case of the non-profit survey cited above, a major finding was that 51% of donors 60 and older give online, which is surprisingly close to the 75% found among donors under 40. Unfortunately, the researchers did not adjust for the fact that the proportion of people 60 and older who don’t use the Internet is far higher than it is among younger adults.
Let’s say the proportion of Internet users among older donors is 65%, compared to 90% among donors under 40 (fairly realistic guesses). The adjusted (and correct) figures should be that 68% of donors under 40 give online, compared to 33% among older donors. Now, instead of saying younger donors are only 47% more likely to give online, the reality is that they’re 106% more likely to give online than are older donors. That’s a pretty huge difference because of a failure to recognize the bias caused by a methodology choice.
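The adjustment above is simple enough to sketch in a few lines of Python. This is a minimal illustration using the article’s own figures (75% and 51% from the online sample) and its assumed Internet-penetration guesses (90% for donors under 40, 65% for donors 60 and older); the function name is mine, not anything from the study.

```python
def adjusted_rate(sample_rate: float, internet_penetration: float) -> float:
    """Scale a rate measured in an online-only sample down to the full
    donor population. Non-Internet users can't give online, so they
    count as zeros that the online sample never saw."""
    return sample_rate * internet_penetration

# Article's figures: online-sample rates and assumed Internet penetration.
young = adjusted_rate(0.75, 0.90)   # donors under 40 -> about 68%
older = adjusted_rate(0.51, 0.65)   # donors 60+      -> about 33%

# Relative gap, unadjusted vs. adjusted (using the rounded percentages,
# as the article does): roughly 47% vs. roughly 106% more likely.
unadjusted_gap = (75 / 51 - 1) * 100
adjusted_gap = (round(young * 100) / round(older * 100) - 1) * 100

print(f"Adjusted: {young:.0%} vs {older:.0%}")
print(f"Gap: {unadjusted_gap:.0f}% unadjusted, {adjusted_gap:.0f}% adjusted")
```

Note that the entire swing from 47% to 106% comes from a single coverage assumption, which is exactly the article’s point: the methodology choice, not the questionnaire, drives the headline number.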
Unfortunately, we now do research in a world where arguably there is no such thing as a fully representative sample. Everything has some skew. Understanding what that skew is likely to be, and what you can and cannot hope to measure accurately as a result, is part of being a good researcher in today’s world.