By Paul Richard McCullough
Using mixed modal data collection, that is, using two or more data collection methods to collect data from the same sample population, e.g., phone and online, is not just bad research; it's gag-awful research. Anyone who tells you differently is selling something.
Commercial marketing research is an interesting business. Like most businesses, we focus almost all our time and energy on execution, on implementation. We’re all about getting the job done. But we’re also about finding truth. Or as close to truth as we can afford.
Marketing researchers are like data engineers. We work hard to build our bridges and our skyscrapers as fast and as economically as possible. But it's also important that they don't fall down. Fast and cheap is important, but it's not enough. In marketing research, bridges and skyscrapers are built with accurate, valid data.
Mixed Modal is tempting, especially when you're faced with a difficult field job. You've already sold the project in, perhaps, and then found out you can't get all the completes you need online. If you switch completely to phone or face-to-face, your budget will blow sky-high. Maybe you can supplement your cheaper online data with a minimum number of more expensive phone interviews. This is done all the time, right? Right. Besides, nobody will even know. WRONG! Your data will know. And your data will suck.
The problem is we do this all the time. One of the largest sample vendors in the world proudly boasts “reaching respondents … via Internet, telephone, mobile/wireless, and mixed-access offerings (emphasis added).” Mixed Mode is a standard solution to difficult field problems. I understand the temptation. But the need for mixed mode to be a valid solution does not make it a valid solution. It does not work. Find another solution.
I recently had a client call me to ask my advice on a problem he was facing. He had collected brand imagery data online and supplemented it with some phone interviews. His problem was that the data from the two modes were completely different. Not close. His question to me was two-part: 1) had I encountered this before, and 2) did I know of a valid way to adjust the data so that the two data sets could be justifiably pooled? I had and I didn't.
I am privileged to know personally some of the brightest marketing scientists in the world. So I sent an email out to my little community of brainiacs and asked them the two-part question my client had asked me. The responses I received were uniform, surprising, and disappointing. Yes, they had each encountered such problems, and no, they had no idea how to deal with them. No idea! Smartest people in research. No idea. Where does that leave you and me?
For a recent Customer Sat Study, my client suggested supplementing the online sample with telephone interviews of some larger customers. The idea was to ensure that key accounts would be represented in the final analysis. The phone data were so divergent from the online results that our only option was to report them completely separately. A rationale could reasonably be constructed to explain the differences based on the differences between key accounts and regular customers. An equally credible rationale could be constructed to explain the differences based on the impact of a personal interview versus an anonymous one. In fact, the differences in sample profile are confounded with the differences in data collection mode, so it is impossible to know why the data are different. We couldn't, in good conscience, pool the data. We were forced to report them separately, as if they were two separate studies. More work for us and less satisfying for the client.
Another problem is that, more often than not, the researcher chooses to pool the data from the two (or more) data collection modes, so no one ever knows how much bias has been introduced into the data, because no one has bothered looking. Pooling mixed mode data is like the little kid who puts his hands over his ears and starts yelling so he doesn't have to hear his mother telling him to clean up his room.
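For those who would rather look than yell, a quick screen is easy. The sketch below is illustrative only: the satisfaction ratings, sample sizes, and the rough |t| > 2 cutoff are my assumptions, not data from any study mentioned here. It compares mean ratings across two modes with Welch's t statistic before deciding whether pooling is even defensible:

```python
# Illustrative mode-effect screen before pooling survey data.
# All numbers below are made up for demonstration purposes.
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = variance(a), variance(b)  # sample variances
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Hypothetical 1-10 satisfaction ratings from each collection mode
online = [6, 7, 5, 6, 7, 6, 8, 5, 6, 7, 6, 5]
phone  = [8, 9, 7, 8, 9, 8, 9, 7, 8, 9, 8, 8]  # personal interviews often skew higher

t = welch_t(online, phone)
# Rough rule of thumb: |t| > 2 flags a difference worth investigating
if abs(t) > 2:
    print("Mode effect detected: do not pool without an explanation")
else:
    print("No evident mode effect")
```

A large statistic doesn't tell you whether the cause is the sample or the mode (as the key-accounts story shows, the two can be confounded), but at least you know the data sets disagree before you quietly pool them.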
Mixing data collection modes is bad research and no amount of convenience or expediency will change that. Either don’t do it or put your hands over your ears and start yelling.