Editor’s Note: Every once in a while, someone sends in an article that is like a slap in the face. This is one such article. While many have considered phone interviewing basically outdated, except for some specialized needs, Simon Chadwick describes a phone method that yields not just an adequate sample, but a superior sample. If you combine the method he describes with some of the chatbot/AI approaches becoming available, there could be a whole new future for phone interviewing. Fire up the DeLorean!
I am going to go out on a limb here and be rudely controversial (so what’s new? Ed.). All too often, researchers today are willing to accept the mediocre in deference to the god of Mammon. Over the past ten years, as budgets have continued to be tight, we have resigned ourselves to the siren song of GEMO (Good Enough, Move On). Thanks, General Mills, for that outstanding contribution to our profession.
Good Enough, Move On
The underlying philosophy of GEMO is that, if it’s directionally correct, it’s good enough for the purpose at hand and we should get on with the next steps. Whether the purpose is launching a new product, building brand equity, maximizing the customer experience, segmenting our customer base or deciding on pricing, if it’s “in the ballpark” let’s accept the results, however shaky the design or sample, and “move on”. This has meant that we have grown comfortable with convenience sampling, DIY platforms in the hands of the uninitiated, and any number of other compromises in the way in which we attempt to guide our businesses to success.
It has also meant that we have honed our innate inner ostriches to perfection. In a series of interviews I did recently with heads of insights functions, I asked: “Are you concerned at all about online sample quality?” The answer was a resounding “yes” – hence the sort of commentary that you see from Bob Lederer and the types of initiatives personified by SampleCon. But when I asked them “is it a problem for you?”, every single one of them gave a variant of the answer “oh no, we sorted all that out with our providers”. The researcher’s equivalent of “my dog doesn’t have fleas”.
Let’s grudgingly accept for the moment that, where some decisions are concerned, GEMO may be warranted. Let’s concentrate instead on cases where research quality and accuracy really are important. And let’s take arguably the most important client of 2016 – Hillary Clinton. She had a real need to know, with a great deal of accuracy, the way in which the vote was going to go – not just in the aggregate, but state by state. The trouble is that online polling is not that accurate and outbound phone polling is too expensive. So if she believed (as she did) that some states did not need attention because they were already in her column, she did not have to poll there. But if she had had at her fingertips a method that was not only really accurate but also really cost-effective, then maybe (maybe) she would have polled in those states. And maybe she would have discovered Pennsylvania, Wisconsin, and Michigan slipping away from her.
Or let’s peer into the future. What if a citizenship question is put into the census? Researchers are pretty sure it will reduce the response rate, especially among Spanish-dominant Hispanics. How could we measure whether that is true if these same people will not respond to an online survey or pick up the phone? What if there is a way of doing it – with accuracy? Then there are whole loads of predictive issues on which the future of entire industries will depend, and the shape of society as well: the trajectory of the opioid crisis, the future of the marijuana industry, the diffusion of electric cars, overall consumer spending, corporate reputations (think Boeing right now). What if there were a method that could accurately predict all of these – but at the cost of online?
Well, there is. Time to take our collective heads out of the sand and recognize that there are methods out there that cost-effectively and actively improve the quality and accuracy of research. One such method is Redirected Inbound Calling Sample (RICS). In essence, it intercepts failed phone calls – from any type of device – and offers callers the opportunity to take a survey before they go on and try to rectify their calls. There are 30 million of these types of calls made every day, and we are all susceptible to making them – rich, poor; black, white; gay, straight. The average cooperation rate is 6–8%, and the resulting sample is the closest thing to dual-frame RDD (DFRDD) outbound phone research, with an accuracy of results to match. What’s more, the system accesses hard-to-reach samples with ease – Spanish-dominant Hispanics (documented and undocumented), Native Americans, the elderly, young white males. And all at a price that is equivalent to online.
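For a sense of scale, here is a purely illustrative back-of-envelope calculation using the two figures cited above (30 million failed calls a day, 6–8% cooperation). It treats those numbers as given and ignores how many calls are actually intercepted and offered a survey, so it is a theoretical ceiling, not a volume claim:

```python
# Back-of-envelope ceiling on daily RICS completes.
# Both inputs are the article's cited figures, not measurements of any
# actual deployment; real intercepted volumes would be far smaller.
failed_calls_per_day = 30_000_000       # failed/misdialed calls per day
cooperation_low, cooperation_high = 0.06, 0.08  # 6-8% average cooperation

low = failed_calls_per_day * cooperation_low
high = failed_calls_per_day * cooperation_high
print(f"Theoretical ceiling: {low:,.0f} to {high:,.0f} completes per day")
```

Even if only a small fraction of those calls were ever intercepted, the pool is deep enough to explain how the method can be priced like online sample.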
RICS has been tested to death by the academic and non-profit communities and has been the subject of numerous AAPOR papers. All agree – this is back to the future. Why? Because it restores a principle of sampling that we have ignored for nearly two decades now – that everyone has an equal probability of being offered the opportunity to participate.
If that’s not good news, I don’t know what is.
To find out more about RICS, go to www.reconnectresearch.com. There you will find numerous academic papers attesting to its quality. Then you too can go back to the future by giving this approach a try.