Your satisfaction survey says 64% of customers give you a score of 5 on your 1 – 5 scale. Should you fire your customer service manager, or give her a raise?
That answer depends entirely on the magic word context, which is an important but often overlooked part of research.
We have to deal with context every day in real life. The best player on the high school football team makes the game look easy, until he gets to college and finds that all of his teammates and opponents were also the best players on their high school teams. Is $10 for a burger expensive? It sure is at Burger King, but maybe not so much at a gourmet restaurant in a five-star hotel. The ability to speak fluent Portuguese may be impressive for an American adult, but any six-year-old in Lisbon can do it.
If the need for context is so obvious in everyday life, why is it so often overlooked in research?
Of course, it’s not just context, but proper context. Let’s go back to the initial example of 64% customer satisfaction. Context for whether this is a positive or negative satisfaction figure might include industry standards or previous research conducted by your organization. Making sure your context is relevant is even more important than finding some type of context for evaluating your performance. I’ve seen plenty of research companies that offer “industry benchmarks” – but upon further investigation, their “benchmarks” are nothing more than the combined satisfaction ratings from other clients (which are fairly meaningless unless they have served a tremendously broad and representative swath of your industry).
It’s important to understand the context you need before you begin to measure anything through research. If you want to compare your findings to those that already exist, you need to make sure that the findings from the existing study were measured the same way, that the samples are comparable, and that the contextual data is actually available to you. If your survey uses a five-point scale and was conducted among your top tier of clients, comparing it against a survey that used a ten-point scale among typical industry customers will be useless at best and misleading at worst.
As researchers, we tend to remember these issues very well when we’re conducting a tracking study, but not so well when we’re looking for existing data against which to compare what we’ve done. You can’t just “rework” the data and try to match things up if your own survey used a different question, scale, or sample, or if the comparative data was gathered too long ago.
The sample is an important part of this. If you are doing research for a local credit union, is it relevant to compare your satisfaction ratings to Bank of America? Or to banks in general, rather than credit unions? Should your context include all financial services organizations nationwide? All competitors in your market? All credit unions? All small, local financial institutions? There is not necessarily a right or wrong answer, but you must have an answer in order to be able to make sense of your data.
There may be multiple contextual comparisons. You may learn that compared to other small, local institutions, you fare pretty well, but the big banks are eating your lunch. Or you may find that credit unions as a group seriously outperform the typical financial institution, but that you’re getting low ratings among credit unions. Just keep in mind that context is a way of looking at the issue from a number of angles, not a way of choosing the angle that makes your organization look the best.
Here’s an example. In the National Basketball Association, 78% of players (as of the start of the 2010/11 season) were African-American. Twenty-seven percent of the head coaches in the NBA were African-American. About 12% of the U.S. population is African-American. Now, should the NBA be criticized because there are so few Black head coaches in a league where nearly eight out of ten players are Black? Or lauded for the fact that the proportion of Black head coaches is more than double the proportion of the American population that is Black? Or should there be some other standard for comparison, such as how the NBA compares to other professional sports leagues?
Here’s another one. Company A has seen its market share grow from 20% to 25% over the last year, which is an increase of five percentage points, or a growth rate of 25%. Company B has seen its share go from 1% to 2% – an increase of just one percentage point, but doubling its share of the market. Which had the more impressive growth? Which can fairly promote itself as the “fastest growing company in the industry”?
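The arithmetic behind both companies’ claims can be made explicit. A minimal sketch in Python (the function name is mine; the share figures are just the example’s numbers):

```python
def share_change(old_share, new_share):
    """Return the absolute change in percentage points and the
    relative growth rate for a market-share move."""
    points = (new_share - old_share) * 100        # absolute change, in percentage points
    rate = (new_share - old_share) / old_share    # relative growth, as a fraction
    return points, rate

# Company A: 20% -> 25% of the market
a_points, a_rate = share_change(0.20, 0.25)
# Company B: 1% -> 2% of the market
b_points, b_rate = share_change(0.01, 0.02)
```

Company A gains five points but grows 25%; Company B gains one point but grows 100%. Both “fastest growing” claims are defensible, which is exactly why the basis of comparison must be stated.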
Sometimes, statistical context simply is not readily available. Maybe there are no applicable benchmarks from the industry, and you have no previous studies against which to track. Sometimes, relevant context may be the expectations of leadership.
One exercise I sometimes use with clients is to have leaders provide two answers to each key question before they see the findings from the study. Number one: what do you expect the answer will be? Number two: what would you be satisfied with? Not only does this provide some organizational context, but it can uncover when various organizational leaders are not on the same page. If there are three people in brand management, and one would be happy with 10% unaided brand awareness, one would be happy with 30%, and the third would be happy only at 60% or higher, the organization has bigger issues than just its brand awareness (and you might be surprised how often stuff like this happens).
In our original customer satisfaction example, let’s say the organizational consensus is that 60% or better is the goal, but that leadership is afraid top-box satisfaction may be down in the 30% range. If we learn it’s actually 64%, this communicates pretty clearly that leadership does not have a good handle on how customers perceive the company, and that things are not nearly as bad as they think. But if leadership thinks the figure is about 60% while setting a goal of 80%, this should spark discussions of whether the goal is realistic, and if so, what needs to be done to see that kind of movement in satisfaction scores. Note that the exercise really must be done before the results are released, so as not to bias the leaders’ answers.
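The exercise itself reduces to a few lines of code. This sketch uses the hypothetical brand-awareness thresholds and the satisfaction figures from the examples above; the 10-point alignment tolerance is an assumption for illustration, not a rule:

```python
# "What would you be satisfied with?" -- one answer per leader,
# using the hypothetical brand-awareness thresholds from the example.
satisfied_with = {"leader_1": 0.10, "leader_2": 0.30, "leader_3": 0.60}

# Flag leaders who are badly out of alignment, before fielding the study.
spread = max(satisfied_with.values()) - min(satisfied_with.values())
aligned = spread <= 0.10  # assumed tolerance: within 10 percentage points

# Once results come in, compare actual vs. expectation vs. goal
# (figures from the satisfaction example).
expected, goal, actual = 0.60, 0.80, 0.64
surprise = actual - expected      # how far off leadership's read was
gap_to_goal = goal - actual       # how far the goal sits from reality
```

Here `aligned` comes back false with a 50-point spread, which is the signal that the organization has a bigger problem than the metric itself.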
Of course, that doesn’t even include the question of whether customer satisfaction is the right measurement for what the company really needs. Do satisfaction scores predict customer retention, or customer loyalty? This context can be provided by tracking what happens with customers who give scores at various levels. Let’s say there are 1,000 customers surveyed, using a five-point satisfaction scale. Matching survey responses back to the customer records, the company finds that one-year retention is 90% for those who rate their satisfaction at a 5, 85% at a 4, 60% at a 3, 40% at a 2, and 10% at a 1. This context tells us that the highest possible satisfaction scores do not appear to be necessary for retention; therefore, the company may be able to adjust its satisfaction goals and target raising 3’s to 4’s, rather than getting limited return out of trying to raise 4’s to 5’s.
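Producing that retention-by-score table is a straightforward tabulation once survey responses have been matched to customer records. A sketch, with a tiny made-up sample standing in for the 1,000 real responses:

```python
from collections import defaultdict

def retention_by_score(records):
    """records: (satisfaction_score, retained_after_one_year) pairs,
    produced by matching survey responses back to customer records.
    Returns observed one-year retention rate for each score."""
    kept = defaultdict(int)
    total = defaultdict(int)
    for score, retained in records:
        total[score] += 1
        kept[score] += retained   # True counts as 1, False as 0
    return {score: kept[score] / total[score] for score in sorted(total)}

# Tiny illustrative sample (invented, not real data):
sample = [(5, True), (5, True), (5, False), (4, True), (4, True),
          (3, True), (3, False), (2, False), (1, False)]
rates = retention_by_score(sample)
```

Run against the full matched dataset, this kind of table is what surfaces the 90/85/60/40/10 pattern in the example, and with it the insight that moving 3’s to 4’s pays off more than moving 4’s to 5’s.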
Numbers are not really useful in a vacuum; they only make sense in some type of context. The more researchers keep this in mind, and have an actual plan for providing this context in a usable manner, the more important and valuable our research becomes to our internal or external clients (and the more important and valuable we become as researchers). Context often can be the difference between providing research and providing insights.