
Mobile Research Quality: Absolute vs. Relative



By Scott Weinberg

I’ve found myself in an intriguing position, having both bought and sold mobile research studies, as a client broker and as a supplier. These are interesting times, no? I look around and see a buffet of webinars, whitepapers, and similar musings, mostly by authors who have never once been in a mobile research study as a participant. The occasional RoR pops up, and of course the endless procession of ubiquity and adoption metrics. What I see little of are frank discussions of mobile research ‘quality.’ This is a broad term, so let’s define it.

Defining Mobile Quality

In this post I’m referring to mobile research, not mobile surveys. Essentially my primary definition here is how much we can ‘trust’ mobile research results. I view this topic in absolute (i.e., as a new methodology) and relative (i.e., compared to other quant/qual fieldwork) contexts. Also, some of this overlaps with security issues which I’ll touch on.

In the Absolute

When I think of mobile research quality in absolute terms, it’s hard for me not to lapse into relativistic comparisons, but I will table that for now. Focusing on this as a new methodology, we all know a few items: it’s a recent entrant into our world, the devices are seemingly everywhere, and people have them nearby at all times. And as tempting as it is to give this a blanket endorsement as ‘automatically’ having quality ‘because these things are so common,’ that would be unwise. I’ve participated in quite a few of these studies (I have 11 research apps running on my G4); my guess would be over 50, though I’m not sure of the exact number. And yes, the usual design issues are in effect: test it so it’s not buggy or looping, shorter is better, etc. Most of the studies I’ve participated in are actually quite thoughtful in their respondent experience. Mobile panelists are quite precious, and the ease with which one can give a one-star savaging in the app stores is on suppliers’ minds.

Regardless of the survey design, UX, etc., what is the key issue regarding mobile research quality? It is this: I’m standing in the (insert_name_here) aisle at Target, taking a barcode scan of the correct or incorrect product with instant validation, taking a picture of my receipt, or maybe using the product at my home. I have provided evidence that I have indeed purchased said product, or been in the aisle examining the signage, etc. Moreover, an implementation of geofencing or geovalidation ensures I’m indeed inside the store during the study and/or when the submit button is reached. Am I sharing the ‘right’ answers regarding what I think of the product, signage, etc.? There is no way to ever know that from any respondent, but why wouldn’t I share the truth? There are no social desirability effects, and my incentive is arriving whether I’m yea or nay on the product. The same goes for OOH ad-recall / awareness studies.
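To make the mechanics concrete, here is a minimal sketch of the kind of validation described above: check the scanned barcode against the expected product, and verify that the phone’s GPS fix falls inside a geofence around the store before accepting the submission. All names, coordinates, the UPC, and the 150 m radius are illustrative assumptions, not any particular supplier’s implementation.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def validate_submission(scanned_upc, expected_upc, fix_lat, fix_lon,
                        store_lat, store_lon, radius_m=150):
    """Accept a submission only if the barcode matches the expected
    product AND the GPS fix is within radius_m of the store (geofence)."""
    if scanned_upc != expected_upc:
        return False, "wrong product scanned"
    if haversine_m(fix_lat, fix_lon, store_lat, store_lon) > radius_m:
        return False, "outside store geofence"
    return True, "validated"

# A fix roughly 45 m from the store center, with a matching barcode:
ok, reason = validate_submission(
    scanned_upc="012345678905", expected_upc="012345678905",
    fix_lat=44.9780, fix_lon=-93.2650,      # respondent's GPS fix
    store_lat=44.9784, store_lon=-93.2650,  # hypothetical store location
)
```

In practice suppliers layer this with server-side checks (timestamping the fix, rejecting stale or mocked locations), but the basic accept/reject logic at submit time looks like this.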

In the Relative

Let’s exit the vacuum and compare this methodology to traditional quant techniques. Having spent many (too many) years inside of online panel suppliers, I can attest to the enormous reliance on these panels to power primary market research. The sheer volume of panel-sourced survey completes is staggering.

Frankly, I think comparing mobile research quality to online panel quality is laughable. There is no comparison; this is a slam dunk in favor of mobile. Maybe you think I’m being glib, but if you’ve seen what I’ve seen you would be nodding in agreement. With the exception of invite-only panels, the amount of fraud in this space is greater than you’ve heard or read about. I won’t deep dive, as it’s off topic, but it goes beyond satisficing, identity corroboration, recruiting sources, and the other supplier sound bites used to reduce hesitation when buying thousands of targeted completes for $2.35.

Yes, these apps are in the app stores, so anyone with a compatible device can install (and rate) them. Some do allow (or require) custom/private recruiting for ethnography, qual, and B2B, but the bulk are freely available to the mobilized masses. Isn’t this then like online panels, in that anyone can sign up? Yes, pretty close. So what’s the difference? One difference is that organized (yes, organized) fraud hasn’t infiltrated this space yet. So there’s that. Another difference is that because this space is app-powered, the security architecture is entirely different, and stronger relative to online panels’ Swiss-cheese firewalls. Yet another difference is the effort required to secure an incentive; specifically, the requirement of being in a physical location helps.

Effort = Good

There is effort required with these studies. You’re not sitting on your couch mouse-clicking dots on a screen. Effort makes respondents invest in the experience with their time and candor. There is also multimedia verification. For example, I’ve listened to OE audio comments, and I would encourage you to do the same if you need any convincing that these studies are somehow not ‘real’ (I can play some for you). Once you hear the tone, the frustration, interest, happiness, etc., your doubts about the realness of these studies will dry up. Incidentally, once you’ve heard OE audio, your definition of the phrase ‘Voice of the Customer’ will get quite a lot more stringent.

I’ll wrap this up and save more for future posts. Thank you for reading, I hope I gave you food for thought and we can enjoy watching this fascinating technology unfold together.


5 responses to “Mobile Research Quality: Absolute vs. Relative”

  1. This is an excellent and long overdue subject.
    Here is my two-pennies’ worth. In my view, the industry is falling all over itself, beguiled by ease of access and data immediacy, when there are serious questions around the whole area of mobile data collection.

    Questionable representation?

    Let’s step back and look at the whole environment in which mobile plays a major role.
    Basically, people use their smartphones for calling friends, searching, Facebooking, Twittering, etc. In other words, these are in fairly constant use. Clearly, then, anything that cuts across my private social space is an intrusion. This suggests that those who willingly and actively participate in such surveys by definition have to be attitudinally different from the mainstream social media user.

    Why are they getting involved? Is it the money? Are these people really trustworthy? Just how genuine is their participation? What types of social media users are willing to give up their private social space to allow some corporate to extract information from them? Doesn’t this fly in the face of the whole rationale of social media as a place where you go to get away from exactly this corporate manipulation?

    The idea that mobile is “of the moment” is very appealing. Logic would suggest that capturing immediate behaviour by pictures, scans, quick surveys, etc. is very meaningful and potentially overcomes all the issues of memory uncertainty that plague traditional market research. But again, ask yourself who these people are. Who would participate in this process, given the points I raise above about social space? It just seems to me the potential for attitudinal bias is huge. There are no stats I have yet seen on the turning off of geo-location data, but anecdotal experience tells me this is now a serious choice for heavy social media users. So unless you have a captive and definable panel, just how reliable is your representation of markets?

    Let’s take the traditional online panels which in theory should also suffer from the same attitudinal biases. In fact research shows this is true. My own experience, based on matched methodologies, confirms panel respondents tend to be significantly more media savvy and brand involved than the general population. Other studies confirm huge attitudinal differences in areas of societal concerns like care for the environment and individual rights and entitlements. So, we can rightly lob the same criticisms at panels as mobile. But there is one huge difference. Panel participation (or not) is unrelated to any issues to do with “the communications vehicle”. In the case of smartphones – and the point I made about personal social media space – there is almost certainly huge attitudinal differences between mobile panelists and the general public. Much more than I would expect for online panels!


    Despite every attempt to play this down, the truth is that completing a survey on any of the current handsets (not tablets) is a royal pain. Mobility means that signal quality is highly variable, screen sizes and their variability make surveys look off-putting and too task-ridden, and anything over a few minutes is almost certainly being ditched partway by potential users. I would love to see the statistics on drop-out rates for mobile surveys! I am a die-hard reader of the Financial Times and love the paper and its journalists. But the number of times I do not complete their surveys (I always start!) due to slow uploads, page flip-flops, the need to scroll to see the full page, etc., tells me this is a real issue.

    Just Another Panel – Same Problems?

    A question that has always bothered me is how do we know who we are talking to when we run a mobile survey?

    Unless they are some in-house captive user panel where demographics and other classifiers can be added to the respondent’s record, how can we be sure we know who we are surveying? Well, the fact is that unless we have a “panel” it seems very unlikely that one could ever manage a push survey seeking interlocked quota samples. A potential mobile respondent is going to be very annoyed after completing three screener questions only to discover they are not in the target audience. How will mobile survey players deal with this?

    Answer: just another panel, with all the faults the writer attributes to online panels.

    Applications Limited?

    Let’s for the moment leave out survey work carried out by a company with its own in-house database of customers who have opted into mobile surveys. This is clearly an excellent application, with roles in customer satisfaction, product and service development, etc.

    But leaving aside this “captive” source, what applications will mobile have in the real world? I would argue very limited ones. Are we expecting research buyers to opt for a 5-8 minute mobile interview over a 20-minute online survey? Just who buys such short surveys, and where would the economies come in once you start analyzing value rather than price? Does anyone truly believe you can just run a few cheap, short mobile surveys and get insights?

    Any serious research buyer is also going to show a sensible degree of doubt over the representativeness of mobile users in either push mobile surveys or even supposedly rigorous mobile panels. Participants in mobile surveys just scream out “we are different” from the general market.

    Worse still, despite the opt-in commitment of mobile panels, I just wonder what the long term attitude will be to brands that use this vehicle for market feedback. It strikes me as an area where marketers are trespassing in a social media environment that isn’t exactly corporate friendly.

  2. I very much enjoyed reading your article, Scott; it is a question I think needs to be more openly discussed. Mobile research isn’t always the solution to everything, but I think there are a few key areas that we need to consider and that maybe Chris is not aware of.
    * Demand – People/respondents/panelists want to do surveys on their mobile devices, just as we do banking, social media, and shopping, read news/sports updates, get directions, write emails, etc. All of these tasks are moving (or have moved) to mobile devices, not just phones but tablets. We can’t keep pushing online if what people want is mobile. Every country is different, but mobile is a global growth area!
    * Mobile Technology Advances – Chris mentioned doing mobile surveys and losing connection, but that is just one way of completing a survey on a mobile device, and it was likely not a mobile-optimised survey either, so the experience is bad. Research clients don’t always realise there are ‘mobile web’ surveys (online surveys taken on mobile devices) and mobile app surveys (surveys served through an app); the experience is different, which unfortunately distorts opinions of mobile research. Lumi Mobile’s mobile apps (Lumi SURVEY, Lumi PANEL, Lumi mCAPI) allow for offline survey completion; even if you are online, the survey will be uninterrupted, as it is not started until it is fully downloaded to your device (iOS, Android, BlackBerry, Symbian, including old Nokia internet-enabled feature phones, and tablets too).
    * Survey Length – I agree shorter is better, but we should question the value of longer 20-40 minute online (or telephone) surveys and the data quality there. People provide much better quality answers if we engage with them in regular short bursts. However, mobile surveys can be longer than 7-10 minutes (if they really need to be), especially with offline completion and the option to pause surveys; or why not section the longer survey into chunks? Even just offering chunks makes people feel more satisfied, even if they gobble it all up at once (which research-on-research has found they tend to do, even when offered chunks).
    * Additional Insight – Photo capture, location validation, geo-triggering, audio capture … all of these allow us to engage with panelists more and give clients new ways of capturing insight and validation, increasing data quality.
    * It’s not always the answer – Some people do have bad experiences with mobile, but generally because they are trying the ‘stick a plaster on it’ approach. Mobile is great when used correctly, thought through, and often alongside other methodologies. It’s important to get consultancy and understand all the best communication approaches and options. Diaries and ethnography are particularly well suited to mobile.
    * Mobile panels are growing fast – Many of Lumi Mobile’s clients are large (and small) panel companies and have mobile app panels across the globe ready and waiting for an engaging new mobile survey.
    …. I could go on but that’s it from me for now! Thanks Chris and Stuart!

  3. My cellphone provider sends me mobile surveys via text message about satisfaction with coverage, signal, and plan, with roughly seven simple questions. I usually respond, and they do it about twice a year. I find it better than their previous approach (four years ago) of phone-in surveys, which I had tried to avoid answering.
