By Vivek Bhaskaran
Respondent quality is a frequent topic among social and survey researchers. Representative sampling and random-probability sampling theory form the basis of almost all survey and attitudinal research. However, most researchers overlook something even simpler: the usability of the survey itself, and the cognitive stress it places on respondents.
Cognitive stress is the technical term for what users of a system experience: it captures their level of comprehension of the system and, more importantly, the internal apprehension and anxiety people feel when they are confused or unsure about a task. The confusion may arise from various factors: unfamiliar colloquial language, complicated statements that respondents are asked to rate, and, often, simply ambiguous statements that may not apply to an individual respondent.
Consider this example:
On a scale where “5” means to a great degree and “1” means none at all, how would you rate the following questions about your company?
1) To what degree do you believe that your top management regards every employee in your company as an innovator, with the potential to produce or contribute to critical business opportunities?
2) To what degree are you encouraged to come up with innovative ideas?
The first question here is loaded. It places a substantial burden on the respondent, and it is very likely that different respondents (employees, in this case) will interpret it differently. Is the question about management valuing employees, or about management valuing innovators? And since innovators are defined here as employees with the potential to produce or contribute to critical business opportunities, is it really about innovators as so defined?
Now compare this to the second question, which is much simpler and easier to understand, at least relatively.
The two questions, similar as they may seem, measure two different constructs: one measures management’s ethos, the other measures management’s practical implementation of it. The researcher needs to treat these as two separate items, so that when the survey results come in, an accurate recommendation can be made to the CEO.
Survey researchers and social scientists have long struggled with measuring cognitive stress. Many times, the survey questions themselves are never validated or tested for efficacy. This is primarily due to cost considerations, and it is assumed that the researchers creating the surveys are experts. They are indeed experts at crafting questions that are not biased and do not lead the witness, but they are also human!
Now consider another example:
Where do you live?
The obvious issue here: does the researcher mean country, city, or zip code? This is a much easier problem to identify and solve. It is an ambiguous question, and it increases the respondent’s frustration; at this point the respondent is left guessing which of the three the researcher really wants. These kinds of issues can be identified easily by having someone else “QA” the survey. But often the real world kicks in: researchers can send survey links to colleagues to validate and QA the instrument, but that is usually done as an afterthought. Moreover, colleagues and friends are NOT a reliable way to ensure that the survey instrument has no glaring ambiguities.
We’ve tried and tested a new model for identifying and measuring cognitive stress in surveys: crowdsourced usability testers. User experience and interaction designers have used this process very successfully in web and app design. Almost all digital design agencies perform some form of usability testing before presenting concepts and ideas to their clients. In the last few years, as crowdsourcing has gone mainstream, remote usability testing has become increasingly popular.
We can take a page out of that model and apply it to surveys: have users record their screens and speak about their experience while taking a survey. Strides in technology have made this kind of remote usability testing, where the testers use their own devices and tools and verbally walk you through their experience, cost-effective and easy to run.
We at QuestionPro have partnered with TryMyUI to provide such an integrated solution to our clients. TryMyUI recently released their Partner API, which enabled us to integrate survey usability testing directly into QuestionPro.
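To make the idea of ordering recorded test sessions through a partner API concrete, here is a minimal Python sketch that assembles such an order. This is purely illustrative: the endpoint URL, the field names, and the `build_test_order` helper are all assumptions, not the actual TryMyUI Partner API, whose real endpoints and parameters are documented separately.

```python
import json

# Hypothetical endpoint for illustration only -- not the real Partner API URL.
API_URL = "https://api.example.com/v1/usability-tests"

def build_test_order(survey_url, num_testers, instructions):
    """Assemble a JSON body ordering recorded think-aloud sessions for a survey.

    All field names below are assumptions for this sketch.
    """
    if not 1 <= num_testers <= 25:
        raise ValueError("num_testers should be a small panel size")
    body = {
        "target_url": survey_url,      # the live survey link testers will open
        "testers": num_testers,        # how many recorded sessions to order
        "instructions": instructions,  # think-aloud prompts for each tester
        "record_audio": True,          # capture the spoken walkthrough
    }
    return json.dumps(body)

# An HTTP client would then POST this body to API_URL with an auth token;
# that network step is omitted so the sketch stays self-contained.
order = build_test_order(
    "https://www.questionpro.com/t/EXAMPLE",
    5,
    "Take the survey and speak aloud wherever a question confuses you.",
)
print(order)
```

The key design point is that the survey link is treated exactly like any website URL in a usability test: the tester opens it on their own device while screen and voice are recorded.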
The screenshots below show how to order Usability Tests from the TryMyUI tester panel.
Conclusion – This qualitative, subjective model for identifying and measuring usability and cognitive stress, using remote testers and recorded video sessions, represents a step in the right direction toward increasing the reliability of survey data. Data collected via surveys are fundamentally inputs into a larger decision-making process, and as researchers we need to be cognizant of the quality of the data we collect. This process makes that data that much more trustworthy.