Don’t you just love those little opportunities life gives us to come face-to-face with our own hypocrisy and intellectual blind spots? They usually are not much fun, but if embraced and learned from they can be truly breakthrough moments.
No, I am not going to turn the blog into a personal self-help diary, but since I have inadvertently made myself a visible figure within our industry I think it’s important to publicly acknowledge when I screw up. And I just did, with my own pet project: GRIT. Here is how the most recent iteration of my favorite tracking study has taught me an important lesson that I think deserves to be shared with the industry.
Like so many of us, I am pretty busy and sometimes suffer from tunnel vision, only paying attention to what is right in front of me. In the case of GRIT, that means I was so occupied with other priorities that when the deadline loomed to launch the Winter 2012 phase of the study, I defaulted to the all too familiar intellectual laziness of “Well, this is how we’ve always done it; it will be OK. I’ll make changes to it next time.” I justified this with the same arguments both clients and suppliers use every day when making project choices, especially when dealing with trackers:
- Changing things will cause chaos in the tracking data; I don’t have time to deal with the implications of that
- It’s not that long; respondents will understand it is important and give us the time
- I REALLY need the data from that question; I can’t cut that one (or that one, or that one…)
- It will be too expensive to redesign the study now
- I don’t have time to devote to reworking this from the ground up; it works fine as is
And so on and so on, yada yada yada. You know the drill. You deal with it all the time; the life of a researcher is filled with trade-offs every day. So I convinced myself that I could get by with kicking the can down the road one more time and would deal with updating the methodology and instrument next year.
But I should not have succumbed to that thinking. It was a mistake. The game has changed, and our very own industry is proving it to me once again.
First – Several folks emailed me to tell me that the online survey wasn’t compatible with their tablet or mobile device. There goes the idea that folks are not participating in online surveys via mobile devices; clearly they are, and I, Mr. Mobile Research, one of the most vocal advocates of “MR must change!”, didn’t plan for it in my design. That’s a pretty big “oops” moment.
Second – People don’t seem to be so eager to take a long survey. We’re seeing a higher drop-off rate than I have ever experienced with this study in the 10 years we’ve been conducting it. I know I find it hard sometimes to devote 15-20 minutes of my day to non-vital requests; why the heck should I assume that isn’t the case for everyone else?
Third – The subject matter may be important and many people would say it is of interest to them, but the user experience is pretty vanilla. It’s certainly not particularly engaging visually or structurally. It’s just your average online survey. No gamification. No cool visuals or innovative question types. It’s more of a chore than anything, and I should know better.
And that is how I was brought face to face with my own hypocrisy. I am loath to admit any of this, but I think we’re learning a lot right now, and that is the silver lining in situations like this.
Now let me be clear: I am proud of GRIT and all that we have accomplished with it. By no means do I think this study is sub-par or of poor quality. It’s just a bit outdated. It was spectacular a few years ago, but I failed to let it evolve to meet the current reality. I still believe that the relevance of the information and the value of the insights we generate are second to none. But is that enough? The most wonderful and perfect survey instrument in the world is still just that: a survey. And therein lies the issue, because the survey just isn’t the best tool in the toolbox anymore: it’s just the most worn down with use.
One really interesting thing for all of us involved with GRIT is that we get to be a client, a researcher, and a respondent at the same time, and this experience reinforces my conviction that our industry (myself included) has a ways to go in order to catch up with the rest of the world. I believe we are now getting a taste of our own medicine regarding the inferior engagement and user experience that the survey model in general provides.
We are in the same spot many of our clients are with trackers: held captive by the original design choices. I’d LOVE to figure out a different way to do this – frankly, I think this survey is emblematic of many of the things that are wrong with MR today. But the tension between maintaining tracking measures, which provide a significant piece of the study’s value, and making changes that enhance the user experience and engagement is a tough area to navigate. In this scenario the lesson for me is that sometimes you just have to bite the bullet.
I’m pretty convinced that next year, tracking data be damned, we’re going to re-think and rebuild this whole initiative from the ground up in order to “walk the talk” a bit more. That is certainly what I would tell a client dealing with a similar issue, so this is a case of “Physician, heal thyself!”.
All that said, I still hope that you’ll take a few minutes to share your time, experience, and insights with us in this final phase of the current iteration of this study. It is an important project. In return, I promise you that next year I will follow my own advice and make this a study that practices what I preach. I think that is a choice many of us are going to be facing very soon indeed.