
GRIT Sneak Peek: How the Industry Really Feels About Change

Despite the frequency of talk about “evolve or perish”, and near universal sentiment that the market research industry is undergoing significant transformation, it seems that the great majority of us are optimistic about the direction the industry is headed.



By David Forbes & Lenny Murphy

It’s become a tradition to debut sneak peeks at the results from the most recent wave of each GRIT Study on the GreenBook blog, and here is the first one from the Fall/Winter 2013 report. We have more planned over the next month or so, until the report is published, but we wanted to start with one of the question areas that elicited the most respondent feedback during the study: the MindSight emotional measurement module.

We’re always looking for new ways to increase the depth and value of the insights in GRIT, and since understanding the unconscious drivers of behavior is a particularly hot topic right now, we decided to incorporate a technique to deliver that in this latest round. We had a few criteria: it had to work within the structure of a survey, be device agnostic, not require webcams, and generate insights validated by a large body of knowledge. That led us to look at implicit response tools, and since I was familiar with the MindSight product by Forbes Consulting and it covered all the bases outlined, we asked them to work with us. They graciously agreed.

We chose to focus our questions on how research pros feel about change: not their stated, explicit, rational views, but rather their unconscious emotional reactions to changes in the industry. With that goal in mind, we built a module that we thought would shed some light on how folks really feel. The results are enlightening and surprising, as you’ll see a bit later in this post.

When we reviewed the respondent comments at the end of the survey, quite a few folks didn’t seem to understand what we were trying to do with this type of tool. Perhaps it was due to unfamiliarity with implicit response methods or disagreement with the approach in general, but there was obviously a fair amount of confusion about that part of the survey.

With that in mind, before we dive into the results we thought it would be helpful to provide an explanation of the MindSight technique.  Forgive us in advance if this seems promotional; that is not the intent. We simply can’t explain the approach without talking about it.

MindSight is a proprietary product that uses a bit of applied neuroscience to uncover authentic emotional insights that respondents might not be able to come up with on their own, getting “under the radar” of the conscious mind and its natural editing functions.

There are three foundational elements of MindSight: the Emotional Discovery Window, the MindSight Motivational Model, and the MindSight Image Library.

Respondents are presented with visual stimuli, and responses are elicited within a very short time frame (the Emotional Discovery Window) that precludes conscious editing. This timing is derived from research on the neuropsychology of emotion in response to visual imagery (Damasio, 2010).

 Image 2
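To make the timing constraint concrete, here is a minimal sketch of how a sub-second response window might be enforced in analysis; the one-second cutoff, the field names, and the sample data are our own illustrative assumptions, not MindSight’s actual implementation:

```python
# Hypothetical sketch: keep only responses made inside the discovery window.
# The 1.0-second cutoff and the trial record layout are illustrative assumptions.

WINDOW_SECONDS = 1.0  # assumed upper bound before conscious editing kicks in

def within_window(trials):
    """Return only the trials answered fast enough to precede conscious editing."""
    return [t for t in trials if t["rt"] <= WINDOW_SECONDS]

trials = [
    {"image": "summit", "chosen": True,  "rt": 0.62},
    {"image": "storm",  "chosen": False, "rt": 0.85},
    {"image": "crowd",  "chosen": True,  "rt": 1.40},  # too slow: excluded
]

fast = within_window(trials)
print([t["image"] for t in fast])  # → ['summit', 'storm']
```

The point of the filter is simply that any response arriving after the window closes can no longer be treated as pre-reflective.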


Visual stimuli for the MindSight assessment are validated to evoke the nine types of feelings specified in the MindSight Motivational Model (Forbes, 2011). The model organizes nine concepts derived from past research to create a comprehensive “unified model” of motivation.

Image 3


The MindSight Validated Image Library contains hundreds of images that are extensively validated to evoke the feeling of fulfilling (or failing to fulfill) one of the nine core motivations in the Unified Model.

Image 4

In the actual MindSight research protocol, respondents are presented with a priming sentence that frames the focus of the exercise, and are then asked to complete a sentence completion task by choosing all of the pictures that complete the sentence.


Image 1

So there is our primer on the model behind MindSight. We hope that clarifies things for folks who didn’t get it. Now on to the results!



Despite the frequency of talk about “evolve or perish”, and near universal sentiment that the market research industry is undergoing significant transformation, it seems that the great majority of us are optimistic about the direction the industry is headed. Reminds me of that famous Gloria Gaynor song — come on, you know the words…

 At first, I was afraid, I was petrified

Kept thinking, I could never live without you by my side
But then I spent so many nights thinking how you did me wrong
And I grew strong, and I learned how to get along

Image 5



There are clearly two ways of confronting impending change – one is all about the prospect of achievement and the other is about the prospect of failure. For Optimists, the emotional expectations of change are seen as opportunities for personal growth. This growth can take the form of creating success in the workplace – and of being rewarded for that success. Optimists also see this change as offering opportunities to distinguish oneself within the industry – by innovating and standing out from the crowd.

Interestingly, for Optimists, market transformation isn’t just about individualistic gains, but also about progress for the research community as a whole.  Optimists see the challenges of the industry as a force that can bring a research community together – creating opportunities to work cooperatively and in harmony.

 Image 6


Image 7


By contrast, Pessimists have a different set of emotional expectations about their prospects in a changing industry. They are generally insecure about the future – feeling at risk and vulnerable – and fear failure – being defeated by industry change. In the end, Pessimists respond to the stress and insecurity by retreating (remember fight or flight?). These individuals expect to react to the future by disengaging from the research community – feeling unproductive, ineffective, and ultimately disinterested.

Image 8


Image 9


So there is our high-level take on how GRIT respondents feel about change. There are nuances here that we explore more deeply in the report, where we correlate them with other measures in the survey to provide deeper context and insights.

The GRIT report will be published in January; stay tuned for more sneak peeks between now and then!


20 responses to “GRIT Sneak Peek: How the Industry Really Feels About Change”

  1. The point is Lenny, that if I was the Marketing Director and was shown this data I would have been seriously questioning why the study was done. I could have hypothesized this on the back of an envelope in 5 mins. If the only insight is the incidence rates for pessimism or otherwise, then I would suggest you could have focused on something much more of interest to GRIT readers. To be honest, the problem with GRIT is that the design encourages a lot of “yea-saying” because many of the questions are about future intent. One area of focus that is now critical is the serious evaluation of the use and effectiveness of mobile surveys. Asking researchers about future intent, in the context of the newness of the methodology, leads to over-claiming and hype. A realistic evaluation in terms of recency and quality of experiences would have been invaluable, instead of the usual “are you likely to use these … in the future?” type questions. The whole social media industry also, in its self-congratulatory way, misses the opportunity to obtain valuable data by using such extended timelines in the questioning (“have you used any of these methodologies in the last 12 months”) that the results are meaningless. More focused time frames (e.g. last 4 weeks) and frequency would, I believe, highlight that many of these new technologies are not taking off as fast as the industry is being led to believe. You should ask readers what topics would be of interest next year. I am sure you would get some great ideas.

  2. Funny what you say RE mobile surveys. Mobile surveys were possible 10 years before anyone even wanted to take a survey on their phone (most of us still don’t want to). Market researchers need to focus more on the insights and analytics and a lot less on the shiny new fielding solutions. We have enough good data, new sources SHOULD be an afterthought.

    Twitter is another excellent example of this, as I alluded to in my blog post today.

    It’s hilarious how market researchers are already talking about using Google Glass for market research even though there are <10,000 users. LOL. Take a chill pill, focus on insights instead of innovation, and look at the data where over 80% of your customers are, not where you think 8% may be.

  3. Amen, Tom! Whether we’re talking about “data” in the context of “big” data or (my favorite) “qualitative” data, the method used to collect the findings shouldn’t matter more than the information being collected.

    And as for managing the changing environment in market research, I’m surprised to learn that the pessimists haven’t been screened out long ago – seems like natural selection [aka 2008] would have been enough to discourage all but the most optimistic among us!

  4. Interesting thought about optimists Lauri.

    I may be wrong, but every poll I recall seeing throughout the past 6 years seems to show market researchers expecting better for next year. I’m not sure what that means.

    Unfortunately market researchers don’t seem to be very good economists. So I take that to mean we are talking here about pure emotions. But then I would need to rely on one of my colleagues with a PhD in Psychology to explain to me whether the above data mean that they really are optimistic about the future (and that’s not just a natural human condition), or that they are very pessimistic about their current situation, or something else entirely.

    Problem in part may be that it seems some sort of control group is missing from this equation.

    1. @Tom and @Lauri, in hindsight we should have kept some traditional explicit question types regarding attitudes towards change as a comparative measure, and perhaps even incorporated some monadic design models to handle different groups, but we made some sacrifices for LOI and expediency. What I think is interesting here overall is that when we look at the explicit questions we used to ask in previous waves, the % of folks who would likely be characterized as “pessimistic” was significantly higher. There is no way to tell if the shift we see now is due to sample changes, new data, or an actual movement in attitude, but I think it’s at least directional evidence of some level of acceptance of change in the industry, and of most embracing new opportunities.

      @Chris, to your point no, none of this is especially surprising, but we didn’t ask the question to be surprised; we wanted to establish a new measure (The Research Optimism Index?) and try out a new technique while we were at it. Per my previous point, I think it is at least directional for further exploration, which I hope we can roll into the next round. I’d love to actually get at what is driving this optimism.

      @Tom, I hear you, but disagree that we have enough good data already. Yes, we have an abundance of data available, but new sources are emerging seemingly every day and we need to explore them, as well as be forward thinking about how to harvest them. Plus, just being data collectors/analysts isn’t enough any more; clients are looking for ways to engage, understand & activate consumers holistically. If MR is going to thrive, we had best not ignore the need to look aggressively at ALL options, even those in early stages of adoption or development.

  5. At first, I was afraid, I was petrified

    Kept thinking, I could never live without GRIT by my side
    But then I spent so many nights thinking how the questionnaire was wrong
    And I grew strong, and I learned it was all a bunch of crap.

    1. LOL, “Gloria,” I am not so sure your new lyrics work as well as the old, but I appreciate the attempt. 🙂 So I assume it’s the MindSight module you have an issue with, or is it GRIT in general? I’d love to hear some constructive feedback on how we could make it better, and of course having a super star of your stature join us in conducting the next wave would be great!

  6. So WHO decided that those who thought their business would remain the “same as last year” should be lumped with the “optimists”? Given the natural human tendency to avoid labeling oneself a failure, many would opt for that hopeful attitude of “things will not get worse… (will they?)”, but that is more wishful thinking than being “optimistic,” which by definition has to be upbeat. How many of these fence sitters were there, and what would be their impact on the “results,” such as they are?

    Tend to agree with Chris that the model is a self-fulfilling prophecy – if I am pessimistic, then sure, I will feel all those things. I don’t really see where it takes us or what help it gives to DO something – “where’s the beef?” It seems like a lot of pretty pictures that merely define in more detail the meaning of “optimists” and “pessimists.” Speaking as a natural born pessimist, of course…!

    1. @Bill, this is an implicit segmentation; the questions were primers only – it was the unconscious association with the emotional context of the images and words that determined the groupings. Again, I am not sure this allows us to “do” anything other than have some directional gauge on how industry practitioners are adjusting to changes in the industry. GRIT is as much about experimenting with new approaches as it is about tracking via hard and fast measures and this is one of those cases where we’ll look at what we gained using this model and then figure out how we can make it more impactful in the future.

  7. Hi David and Lenny,

    Thanks for including this module in the research this year. I want to express my hopes for what we will see in the full report and then ask some clarifying questions.

    First, it appears as though you have collected very valuable data on the emotional motivations of research suppliers and clients that goes well beyond just classifying us optimists or pessimists. By linking to David’s motivational framework and cutting the data across other variables that you have in the study, it seems like you could produce a report revealing the motivations of certain “types” of suppliers and certain “types” of clients regarding their approach to the future and potential use of methods.

    For example, imagine if we found that clients considering, but not yet using, gamification scored higher on feelings of “shame” and lower on feelings of “mastery” about the future. This might provide valuable insight for suppliers offering gamification solutions on how to position their product, and how to interact with clients who are interested but reluctant to try. Given those insights, suppliers might decide to develop and use case studies during the selling stage demonstrating boardroom acceptance of the gamification approach in order to alleviate fears of “shame” or “embarrassment.” Likewise, clients might realize that their reluctance to try could be overcome by developing feelings of mastery, and begin to demand more training and education on the techniques from suppliers in order to adopt them.

    I offer this as one potential example of the power of the data to bring suppliers and clients closer together by meeting the emotional needs of the buyer. I’m sure the community can and would come up with many other/better ways to analyze and use the data. I would imagine there are many more variables within the study that could provide additional valuable insight for us all. I hope these insights do not stay locked in the database, but rather will be shared for the benefit of the community.

    Second, for clarification, you indicate that the method is implicit, however it is not clear how the technique meets the minimum criteria for being an implicit method. You cite Damasio (2010) in your description of your emotional discovery window, but it seems like you may be referencing his popular science book “Self Comes to Mind”. Can you be more specific? Are you referring to the Rudrauf et al. (2008) article that evaluates the time course of becoming conscious of emotions and feeling states?

    The criteria for an implicit measurement technique are clearly defined in the literature by Nosek, Hawkins and Frazier (2011) as techniques that are NOT “direct”, “deliberate”, “controlled” or “intentional” self-assessments. These criteria are critical for avoiding the can’t say/won’t say issue in research. To truly assess whether processing is automatic, or as some say, System 1 processing, we need to ensure that the measurement techniques meet these criteria. It is not clear from your description, or reference, how a conscious selection of whether an image completes a sentence is indirect, uncontrollable or unintentional, even if that conscious choice is required to be very fast (< 1 second). Can you clarify this for us? It also appears from your description that the images are validated using conscious classification measures – it is very valuable to have the images classified and tied to a motivational framework, but why not use an implicit technique, or even a combination of implicit and explicit, for classification? And lastly, the descriptive exercise at the end of the module, wherein we chose which words best described each picture after the rapid presentation phase, also appears to be a conscious association measure. Can you clarify which components of the process you think meet the criteria for being implicit?

    It's important as we establish new methods in the industry that our use of terminology is consistent and based in science. Again, it appears that you've collected very valuable information on the emotions clients and suppliers have about the future, and I am hopeful that we all can benefit from the data you've gathered, but it is not clear that this data is implicit.

    1. Hi Aaron,

      Great comments and questions!

      We do have deeper data here and in the full report will reveal more of the analysis we conducted using a combination of the other data points and the classifications that emerged using the emotional framework David supplied. That said, we didn’t go as far as you have outlined, and now I’m a bit jealous that we didn’t! Of course the predictive ability to take the segments we have developed and score them in a variety of ways makes perfect sense and perhaps we can look at some type of supplemental “typing engine” model to give the industry a bit more ammunition to use to help drive adoption.

      All of the data will be made publicly available via an online dashboard and everyone will be free to do their own, deeper analysis. Perhaps some enterprising folks will come up with even more useful ways to make the data add impact!

      I’ll let David tackle your second point since he is the creator of MindSight and can dive deeper into the science and models behind it. For what it’s worth, I do view the model as implicit as classically defined and actually think it is very similar to the approach (at least in terms of process if not underlying models) that you so graciously demoed for me in Nashville at TMRE.

  8. Hi Lenny,

    I’m thrilled that the data will be public for analysis. I think there is some very valuable insight on motivations here.

    In terms of whether the method qualifies as being implicit…fortunately we don’t have to rely on belief – there are well established peer-reviewed criteria for defining and developing implicit techniques. What I’m really interested in, and what I think the community deserves, is an explanation of how it meets the criteria of NOT being a direct, deliberate, controlled and intentional evaluation. If it doesn’t meet those criteria, it’s okay, it doesn’t mean the technique isn’t valuable, it just means it’s not implicit.

    In terms of whether this technique is similar to Sentient Prime…I would offer that the ocean and the sky are similar in the sense that they both appear blue to the eye – but their makeup is very different, and they are useful in different ways. We need more information on how MindSight meets the implicit criteria to determine whether it is similar to Prime or any other implicit research technique.

    Again, it doesn’t have to be implicit in order to be valuable. But as GRIT and this blog become more and more important as a resource for researchers on the future of the industry, let’s collectively be clear on what qualifies as a truly implicit research technique. Note that we don’t have to do the work of defining it ourselves – we have two decades and hundreds of peer-reviewed publications on implicit methods that have already done that work for us.

  9. Hi Aaron,
    I very much enjoyed your thoughts about further analysis and thinking that could be driven by the emotional data collected via MindSight® in this year’s GRIT survey. I’d love to chat directly at some point – perhaps with you and Lenny together, about some of that thinking.
    As to the issue of “Is it an implicit technique?” – I offer the following:
    Firstly, MindSight® clearly requires in the first order of events that the respondent do something intentional. However, this intentional action is limited to a “yes/no” reaction driving a button press decision. Moreover, we execute this stimulus/response cycle in the sub-one-second time frame that we call the “Emotional Discovery Window.” (BTW, the original work used to support this concept includes the work of Rudrauf et al., and the Damasio book cited is indeed “Self Comes to Mind.”) Based on this area of work, we would argue that the subject can do very little cognitively beyond evaluating a feeling (from an image stimulus) and making a “yes/no” decision about the fit of the feeling with the priming sentence. We would argue that this method overcomes the “editing/distortion/presentation” biases typically associated with “direct, deliberate, controlled” assessments – which is of course the goal of implicit measurement techniques.
    Second, there is a large element of the algorithm for scoring the MindSight® results that is very much “implicit.” That is, the scores produced in the MindSight® profile are the result of assigning points for images chosen, as well as points for the speed at which each image is chosen. In our validation work early in MindSight® development, we selected groups of subjects whose “foot vote” in life would suggest a skew in MindSight® profiles (e.g. Nurturance for nurses; Mastery for professional musicians), targeted the prediction of membership in these groups as a validation exercise, and used that exercise as a basis for refining the scoring algorithm to make it optimally predictive. It turned out that the best prediction from MindSight® scores to membership in the various “motivationally skewed lifestyle groups” was gained by assigning a significant portion of the score for each image choice as a function of the speed at which the image was chosen. This portion of our MindSight® profile is thus the result of formally “implicit” measurement.
    Aaron, I hope this information advances the conversation about MindSight®. I’d be more than happy to talk with you directly at some point – oh, and Happy Holidays!!
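The kind of scoring David describes, a base point per chosen image plus extra credit that grows as the response time shrinks, can be sketched roughly as follows; the weights, the bonus formula, and the motive labels are our own illustrative assumptions, not the actual MindSight algorithm:

```python
# Hypothetical sketch of a speed-weighted choice score.
# Weights, the bonus formula, and the motive labels are illustrative
# assumptions, not the actual MindSight scoring algorithm.

WINDOW = 1.0         # assumed response window, in seconds
CHOICE_POINTS = 1.0  # base credit for choosing an image
SPEED_WEIGHT = 1.0   # extra credit scale for faster choices

def score_profile(trials):
    """Accumulate per-motive scores: a base point for each chosen image,
    plus a bonus that grows as the response time shrinks."""
    profile = {}
    for motive, chosen, rt in trials:
        if not chosen:
            continue  # unchosen images earn nothing
        speed_bonus = SPEED_WEIGHT * max(0.0, (WINDOW - rt) / WINDOW)
        profile[motive] = profile.get(motive, 0.0) + CHOICE_POINTS + speed_bonus
    return profile

trials = [
    ("nurturance", True, 0.40),   # fast "yes": large bonus
    ("nurturance", True, 0.90),   # slow "yes": small bonus
    ("mastery",    False, 0.50),  # "no": no points at all
]
profile = score_profile(trials)
print(profile)  # nurturance ≈ 2.7, mastery absent
```

The relevant design point is that two respondents who choose the same images can still end up with different profiles, because the latency of each choice carries part of the score.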

  10. Hi David,

    Thank you for the additional details on the method. And thanks for the holiday wishes – we’re loving all of the snow here in Portsmouth – I think you guys are near us, did you get much snow in Boston? Given that we’re so close, it would be nice and easy to get together. Perhaps the three of us could create an industry report around how to match the motivations of suppliers and clients – that would be fun! I do think/hope that in the least this discussion will advance the conversation around MindSight and about implicit research techniques.

    I love your validation procedure on career choices. (BTW, did you know that people named Dennis or Denise are more likely to become dentists than people with other names? (Pelham et al., 2002) 😉 ) It’s great that you are validating on an independent criterion variable and showing predictive validity. However, from your description it sounds like you are validating the predictive validity of an explicit attitude measure that has been enhanced with response times and a restricted response window.

    As you describe, the speed of the response is a key factor in accurately predicting motivational groupings. However, it appears that what you are really measuring is the speed with which conscious confirmation of fit comes to mind and is willingly expressed. That qualifies as an enhanced explicit technique (can say and will say), which is very valuable, but does meet the criteria for an implicit test.

    There are a couple key factors here that will hopefully help in evaluating whether a technique is an implicit technique versus an enhanced explicit technique. And I will also try to explain why it matters for market research applications (we’ll see how I do!).

    #1) The term “implicit”, as used in behavioral science, has a very specific meaning. It stands for automatic, irrepressible cognition. Therefore, measures that allow for the repression of an evaluative response (e.g. choosing not to select a picture that represents a category) are not considered implicit. To truly measure an implicit attitude we need measures that do not allow for the conscious control of the evaluation of the object of interest.

    #2) Derived does not equal implicit. A great example of this is choice based conjoint. These studies derive what is important by observing an individual’s choices in trade-off situations. The data is great, highly predictive, and distinctly NOT implicit. The choices are explicitly stated and respondents have control over their expressed preferences. (Note, that you can often enhance these explicit measures by recording the response time for each choice). Regression is another example – we can derive the weight of an attitude evaluation variable in predicting an outcome behavioral variable of interest, however, that does not mean that we have gained insight on the automatic, irrepressible nature of that attitude.

    #3) Response time measures alone do not equal implicit. Measuring response times to a judgment task can provide data that is useful in refining the estimate of the strength of an association; however, when that judgment is a conscious, direct evaluation of a target object, it is still susceptible to the can’t say/won’t say problem. This is true even of conscious judgments made in less than one second. Research has shown that conscious processing occurs as early as 300ms post stimulus presentation (Williams et al., 2004), and that preferences can be expressed with behavioral measures starting as early as 300ms after the presentation of two products.

    Why does any of this matter? First, I think it’s critical that we get our terminology accurate as we bring these new methods to the industry to reduce the growing confusion. Second, this is more than a semantic argument. We need to clearly define the differences in value that each of these techniques provide for researchers and insights professionals. Truly implicit techniques are accessing automatic, irrepressible cognition, and provide an opportunity for researchers to access different insights on the drivers of behavior than explicit measures. That is, implicit techniques often account for different variance in consumer behaviors of interest than explicit techniques. This means that Insights departments are gaining new knowledge on the drivers of behavior when they use implicit techniques. These are “a-ha” insights moments! That is not to say that enhanced explicit techniques do not also provide important “a-ha” moments – but rather that there will be different revelations from the use of each method.

    In the end, the best approaches are often those that measure the explicit and the implicit in the same study, because the output is a more robust explanatory model of the consumer behavior. If the explicit techniques being used can be further refined with the addition of response times and restricted response windows then we have an even better approach for the industry.
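The closing point above, that measuring explicit and implicit responses in the same study yields a more robust explanatory model, can be illustrated with a toy regression. The data below are synthetic, invented purely for this sketch (not GRIT results), and the effect sizes are arbitrary assumptions:

```python
# Synthetic illustration: invented data, not GRIT results. Shows why a
# model with both an explicit and an implicit predictor can explain more
# variance in behavior than the explicit measure alone.
import numpy as np

rng = np.random.default_rng(0)
n = 200
explicit = rng.normal(size=n)   # stated attitude score
implicit = rng.normal(size=n)   # automatic-association score
# Assumed ground truth: behavior depends on both, plus noise.
behavior = 0.5 * explicit + 0.7 * implicit + rng.normal(scale=0.5, size=n)

def r_squared(X, y):
    """Ordinary least squares R^2 with an intercept term."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_explicit = r_squared(explicit, behavior)
r2_both = r_squared(np.column_stack([explicit, implicit]), behavior)
print(f"explicit only: {r2_explicit:.2f}, explicit + implicit: {r2_both:.2f}")
```

Because the simulated behavior genuinely depends on both predictors, the combined model's R² exceeds the explicit-only model's, which is the statistical shape of the "more robust explanatory model" argument.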

  11. Quick note: it was brought to my attention that there is a key missing word in one of the sentences of my last post. The section commenting on whether MindSight, as described here, qualifies as implicit should read:

    -“However, it appears that what you are really measuring is the speed with which conscious confirmation of fit comes to mind and is willingly expressed. That qualifies as an enhanced explicit technique (can say and will say), which is very valuable, but does NOT meet the criteria for an implicit test.”

    -Sorry for any confusion! Hopefully, all of the follow on discussion made it clear that the technique, as described, does not meet the criteria to qualify as an implicit technique.

  12. Hi Aaron,

    Hope you are having a good holiday season. I wanted to get back to you before 2014 rolls in.

    I suspect we might be able to debate vigorously about whether the “can’t say”/“won’t say” problem is solved by any particular research protocol (short of actual neural measurement — which we know carries its own set of limitations). I’m not sure that being “implicit” according to your definitions is the only route to gaining access to psychological insights a consumer might not be able or willing to share. And I am not in the least convinced that all measures which are not “implicit” by your definitional criteria can be likened to the “explicit” attitude items that were the historical path to learning about feelings.

    The MindSight protocol elicits emotional responses from subjects during a period of time in which the emotional reaction to the stimulus image is at center stage of neural activity, and reflective, considered cognitive processing (of the kind that can lead to editing) is not yet underway. (Your Williams et al. example is not what most would call reflective cognitive processing, but more of a recognition task.) MindSight presents its emotionally evocative images in a welter of randomly ordered stimuli whose emotional diagnostic value may later be perceived upon reflection (after the test), but whose significance at the time of presentation cannot be reflected upon without time expiring before a response is made.

    MindSight also “aids” respondents in considering a very broad range of emotional reactions to a topic or product that the subjects might not otherwise bring to mind. Whether these subjects “could” hypothetically say these kinds of things about their feelings from the standpoint of your definitional distinctions concerning “implicitness” becomes moot for the clients who are hearing things that they have not heard before, and learning things they have not previously learned.

    It’s probably best to remember that we are in the end applied scientists, and that providing clear and valuable results for our clients — ideally results that might not be available using other methods — is the ultimate measure of our success. Our clients in pharmaceuticals, financial services and consumer packaged goods are in broad consensus that MindSight gets emotional responses from subjects that respondents in their past efforts have not talked about.

    Aaron, I’m sure that the broad and fertile plains of emotional research hold the promise of acreage for all of us who would practice this kind of intellectual husbandry. I enjoy the challenges of plumbing the depths of consumers’ emotions as much as I suspect you do. And I wish you well in your enterprise.

  13. Hi David,

    Thanks again for the additional holiday wishes. It has been a wonderful year.

    I’m glad that you had the opportunity to emphasize how much value your clients get out of the use of your technique. It should be very clear from my comments that this discussion is not about the value of your approach. I am eager to see the data from the GRIT report, and I think it has the potential to provide a lot of value for the industry on the motivations that are relevant to how suppliers and clients perceive the future.

    What is important for readers of GRIT to understand is the difference between an enhanced explicit technique and a truly implicit research technique. When I say that MindSight, as you have described here, is an enhanced explicit technique, I do not mean that pejoratively. We need enhanced explicit techniques in the industry. We also need truly implicit research techniques.

    The technique that you have described requires respondents to self-report whether an image completes a sentence. That is a direct, controllable evaluation. You have enhanced this direct, controllable evaluation by reducing the response window and recording response times. The image is then later assessed for additional associations with words through a self-reported exercise of choosing which words best describe the picture. That is a second direct, controllable evaluation. And, as you describe, those images have been previously categorized into motivational buckets through explicit assessments. That is a third direct, controllable evaluation. You have added a nice validation component of these explicit, direct, controllable evaluations on a separate criterion variable (choice of profession), which gives you evidence of the predictive validity of your enhanced explicit technique.
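    For readers unfamiliar with the mechanics being debated here, the reduced-response-window step can be sketched in a few lines of code. This is a hypothetical illustration, not the MindSight implementation: the image identifiers, the 1500 ms window, and the `respond` callback are all made-up stand-ins for a real stimulus-presentation system.

    ```python
    from dataclasses import dataclass
    from typing import Callable, Optional, Tuple

    @dataclass
    class TrialResult:
        image_id: str
        chosen: Optional[bool]   # None if the window expired with no usable response
        rt_ms: float             # response time in milliseconds
        within_window: bool

    def run_trial(image_id: str,
                  respond: Callable[[str], Tuple[bool, float]],
                  window_ms: float = 1500.0) -> TrialResult:
        """Present one image and accept a yes/no response only inside the window.

        `respond` stands in for the UI event loop: it returns the participant's
        (choice, response_time_ms) for the presented image.
        """
        choice, rt_ms = respond(image_id)
        if rt_ms > window_ms:
            # Too slow: the response is discarded, mimicking a speeded protocol
            # intended to limit reflective, considered editing of the answer.
            return TrialResult(image_id, None, rt_ms, False)
        return TrialResult(image_id, choice, rt_ms, True)

    # Simulated participant: a fast response is kept, a slow one is discarded.
    fast = run_trial("img_07", lambda _: (True, 640.0))
    slow = run_trial("img_12", lambda _: (True, 2100.0))
    print(fast.within_window, fast.chosen)  # True True
    print(slow.within_window, slow.chosen)  # False None
    ```

    The point of the sketch is simply that each trial still records a deliberate yes/no choice; the time limit constrains when the choice is made, which is the distinction at issue in this thread.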

    When you say you are not in the least convinced that your technique is similar to historical explicit measures, it seems like you may be creating arguments that you can in turn disagree with. It has been very clear that the argument here is not that your technique is the same as a traditional attitude statement, but rather that it is an enhanced explicit technique that does not qualify as an implicit technique. And in the end, there doesn’t seem to be much of an argument about that (at least not a scientific one), given that all of the stages in your technique are based on direct controllable responses.

    It is also important to note that the minimum criteria for implicit measurement tools are not “my descriptions”. Those are the criteria currently published in the behavioral science literature. So if you’re raising the question of whether those criteria are valuable in characterizing implicit research techniques, your argument is really with the current conclusions of the last 20 years of behavioral science research on implicit techniques, not with me in particular. If you are really serious about that argument, I would encourage you to try to publish your technique in the implicit social cognition literature. I think you will find that the reviewers will raise the same objections that I have raised here, and likely much more vehemently.

    Again, this critique doesn’t mean that the approach is not valuable; it does not mean that the approach doesn’t provide unique insight (see my comments in the previous post on how these techniques account for different variance in consumer behaviors of interest); it doesn’t mean that you’re not accessing emotion (emotions can certainly be measured explicitly as well); and it doesn’t mean that your presentation of images isn’t a helpful “aid” for people self-reporting their emotions. It simply means that it is not an implicit evaluation and cannot appropriately be characterized as a measure of automatic cognition, and that truly implicit techniques will therefore likely provide different insights than your technique. All of that is okay: we need enhanced explicit techniques as well as truly implicit research techniques to give our clients valuable insights.

    Lastly, if you remain under the impression that EEG, fMRI or other direct brain activity measures are the only way to truly measure automatic cognition, I would encourage you to read the vast literature on implicit social cognition, beginning with Fazio et al. (1986; 1995) or Greenwald et al. (1998). And I would be sure to include the strong summary analyses of Fazio & Olson (2003), Gawronski & Bodenhausen (2006) and De Houwer et al. (2009) in your literature review. As of 2011, there were over 6,000 scientific citations of this body of work (Nosek et al., 2011). Obviously, as a science, we’ve been at this for some time. What is not as obvious, but is critically important for #mrx, is that protecting the science when we apply it to business, including the scientific principles of implicit measurement techniques, will determine whether and to what degree these methods have a lasting impact on our industry.
