CEO Series

June 10, 2012

Updated! Everything You Ever Wanted To Know About Google Consumer Surveys

Four unique interviews with the Google Consumer Surveys team – compiled for your review.

by Leonard Murphy

Chief Advisor for Insights and Development at Greenbook


The movement of Google into the market research industry has certainly captured the attention of the MR world, and rightfully so. One of the trends that many in the industry had predicted was the movement of major tech suppliers into the data collection space, and while others such as IBM, Yahoo, Microsoft, and LinkedIn (to name a few) have made forays into this space, none have done so with such a well-thought-out, focused, and potentially game-changing approach. Nor have any of those other entrants so willingly and openly reached out to the industry. The GCS team has been engaged in an unprecedented campaign of PR and outreach, which I think speaks volumes about their intention to grow this business unit.

As part of this PR blitz, Paul McDonald, the lead Product Manager of GCS, recently conducted two interviews: one with Renee Murphy (no relation, but a great researcher with a great name!) in support of the upcoming Market Research in the Mobile World conference (of which Google is a sponsor) and another with Dana Stanley, Editor-in-Chief of the always excellent Research Access blog. Renee and Dana cover a lot of ground with Paul, and I think you’ll find the conversations very enlightening. With their permission I have reposted both below.

In a similar vein, Ray Poynter and I had Paul McDonald, as well as Brett Slatkin, the Product Engineer for GCS, on Radio NewMR last Tuesday. It’s a great conversation that covers a lot of ground, and you can listen to it here.

Also, Paul McDonald and Monica Plaza did a great interview with Brian Tarran at Research magazine; you can find that here.

As thorough as Brian, Ray, Dana, and Renee were, there are plenty more areas to explore with Paul and Brett, so with that in mind I was invited by the Google team to moderate a live Q&A session with the GCS product team.

On Tuesday, June 12th at 10AM PT I led a Google+ Hangout On Air with +Paul McDonald, +Brett Slatkin, and other team members about the future of market research and Google Consumer Surveys. You can follow Google Consumer Surveys on Google+ or watch it right here:

 

Here are the interviews by Renee and Dana. Enjoy!

 Interview with Paul McDonald by Renee Murphy

As part three of the pre-MRMW interview series, I had the privilege of interviewing Paul McDonald of Google Consumer Surveys.  He gets into something quite near and dear to my heart: caring for the folks participating in the research.  Without people that are willing to participate in research, we won’t have a future.  Here’s our conversation.

RM: What are you most excited to share about with us at MRMW and why?  What difference could it make for your audience if they were to implement what you’ll be talking about?

PM: One of the things we think a lot about here at Google is speed. There is a maxim we frequently use that comes directly from Google’s founders, “Faster is always better”. In the market research world, there seems to be a perception that speed comes at some cost, typically quality. It doesn’t have to be that way.

We think about speed in all aspects of our product and it’s really at the core of what we are doing. The initial idea for Google Consumer Surveys was conceived to find a faster way for users to access protected content online (on sites like the NY Times and WSJ that required you to pull out your wallet and pay to get access to that content). We had to find a way that readers could quickly get access and the publisher would still get value from the interaction. To be frank, we stumbled upon market research as the answer, but once we realized that answering a question could take less than 10 seconds and that researchers would pay to get these questions answered, we knew we were onto something.

With our large network of publishers, we realized that not only could readers get access to content quickly, but researchers could also get thousands of answers to questions in a matter of hours across a wide demographic.

It’s hard to put into words how large our respondent base could be. Every day on our ad networks we serve billions of ad impressions. If just a small fraction of our publishers implement Google Consumer Surveys, we are talking about tens of millions of unique users per day answering questions. Add in the volume from mobile applications and we could easily reach more than 100 million unique users a day. It puts us in a unique situation to let researchers choose to either get nearly instantaneous answers to their questions or to get a nearly perfect representative sample of a given population over a longer period of time, no matter how small that population might be.

So what does this mean for researchers? Among other things they can:

  • Enable companies to make accurate, data-driven decisions in near real-time
  • Stop spending days or weeks getting the questions just right or making sure they’ve asked for exactly the information they need. Instead, the data collection can be iterative, adjusting to the data collected to create the perfect survey.
  • Track sentiment and opinion in smaller increments. A brand or politician can make messaging adjustments on a daily or hourly basis in response to feedback.

RM: It can be easy to bemoan the state of market research today.  Instead of us talking about what you’re against in the traditional MR space, I’d love to hear about what you’re for – what you stand for – in the MRMW space.  What makes this something you’re willing to stand up for?

PM: The respondent experience.

It’s easy to overlook the burden research puts on respondents. Instead of feeling empowered by seeing your feedback incorporated into products and services that you use and love, you end up cringing anytime you are asked to complete a survey. We’ve made a conscious decision to stand for the respondent first because ultimately we believe that the quality of the data you get back is a direct reflection of the experience you put the respondent through.

By taking this approach we’ve had to make some tough choices for researchers. The premise of our ecosystem is that readers trade off their time for access to content. They need to be able to approximate how much time it will take to answer a survey in order to make that trade-off, so we intentionally limit the number of questions asked of any one respondent in one viewing to two. This results in accurate answers, as it takes more effort to attempt to deceive us than it does to answer honestly.

We also limit the number of characters in the question text and answer options to focus researchers on creating clear and concise questions. Similarly, we limit the number of answer options shown at any one time to make it easy for the respondent to comprehend the question and answer accurately. Finally, we limit the question formats, purposely staying away from grids, complicated branching behavior, and confusing rating systems. We’ve also introduced new question formats that are interesting for users and useful for researchers. Our image-based questions are great for brand recognition tests, design and product comparisons. These questions are fun to complete and simple to understand.

Finally, we review each survey for comprehension and adherence to our policies, so users are getting questions they understand and researchers are getting back data that is accurate and useful. In the end we think the respondent experience is the most important factor in quality research, even if it means a more limited experience for researchers.

RM: Thanks, Paul.  We look forward to hearing more in July!

Interview with Paul McDonald by Dana Stanley (originally published on Research Access)

DS: You’ve certainly made a big splash with the launch of Google Consumer Surveys. What was your background prior to being part of the Google Consumer Surveys team?

PM: Sure, so I came right out of school and joined Google about nine years ago. And I worked on a whole bunch of different things here at Google. Anything from ads to our commerce initiatives. I was part of the team that started Google Checkout. I also worked on our developer tools, so tools for external developers, and our APIs. I was part of the team that launched Google App Engine. And then I moved back to ads and worked on some optimization things for advertisers. And most recently I was the product lead for Gmail. So I basically ran the product strategy and design for Gmail for about two years before moving off to start this project.

DS: And how long has this project been in the works?

PM: Well, it actually started as what we call a “20 percent project” at Google. A friend of mine, who is an engineer I worked with on Google App Engine, had some ideas around helping publishers monetize their content. And he and I went back and forth a few times and sort of stumbled upon this idea that we could have users spend their time instead of their money for access to content. That was in July of 2010 when the idea kind of first came up.  We put it on hold for a long while because we were both doing other things. I was the product lead of Gmail and had a very busy schedule. So we worked on it on the side. And then around March of last year we kind of decided this is something that we really wanted to look into and do. And we started really focusing on this project.

DS: And why did you decide to enter the market when you did?

PM: I think it was really more about the product that we wanted to create, and what we thought the minimum viable product was to get out the door. Again, this all comes from a different angle than traditional market research. It started with an idea around figuring out ways for publishers to monetize content, because at Google it’s in our interest, of course, to have publishers keep their content free and online for users. And what we saw was a lot of the publishers were sort of packaging up their data, the stories and articles, into applications for phones, mobile phones, and tablets. And at Google we don’t really have access to that data anymore once it’s kind of packaged up and sold to a user that way.

So we wanted to keep the publishers putting their content online. And the only way that we saw that was possible is if we could solve the monetization problem for them. If you look at the major newspapers over the last five or six years, the revenue from physical paper subscriptions has gone way down. And the online revenue from ads on their sites hasn’t really made up for that loss of revenue. And so a lot of them are turning to paywalls. The New York Times and Wall Street Journal, for example, are known for these paywalls. And we didn’t really like that user experience. We didn’t like that users had to pay for the content or pull out their wallet. It took a lot of time, and for some of the smaller sites it wasn’t always clear what value you were getting for that money.

And so what we tried to do is figure out a way that users could spend their time instead of their money getting access to this content. And at first it wasn’t about market research at all. Instead, what we were trying to do was have users do things that computers weren’t very good at doing, human computation tasks is what we called them initially. These are things like labeling images for search results or maybe picking the best set of search results or websites for a given query. Things that are hard to do, that Google finds particularly difficult to do. We thought we could get users to do it for us, because humans are better than computers at these tasks.

It turns out there isn’t really much of a market in that, at least right now. And users took a long time to get these things right, and didn’t like to do them. But when we put in some market research questions just to kind of test the viability of them for users on our first publisher, it worked really well and we found that we got really surprisingly accurate and consistent data back from these questions. And so we really sort of focused in that area. And that’s kind of how the idea germinated and became what it is today.

DS: And how did you go about designing the product?

PM: So initially we built a product that we wanted to use. At Google we have a bunch of different products and questions that we have about them for our users. So we started with the groups that do research at Google and asked them, what would you like to see out of a tool like this? And we actually have a lot of market research scientists employed at Google to provide teams with support. And so we relied on them to build a product that they thought was viable in the industry and was usable. And really we tested it out on ourselves for a long time before we opened it up to external companies to try.

And I think at Google there are a couple of things that are really important to us. One is speed. We have a maxim here at Google that faster is always better. And we really focus on speed in a lot of different areas, particularly the user experience or the respondent experience: that they can answer these questions very quickly and get access to the content quickly. And for the researcher, too. We wanted them to be able to create and field surveys and get responses back within hours instead of the days or weeks we’ve seen on other platforms. And we wanted them to be able to go through the data very quickly, analyze it, and pull out insights from that data very quickly. So those are the things that we sort of focused on initially.

And I think you’ll see in the product, we made some choices that may seem strange from a market research perspective. For example, we saw that users really weren’t willing to do more than two questions at a time in order to get access to content. So we limited the number of questions shown to a user at any one time to two. Now that also limits us from the research side. We can’t do cross-question correlation beyond two questions. But we also found when talking to a lot of researchers and companies that do research that if you had the demographic data supplied for you already, the cross-question correlation beyond two questions was actually fairly rare. There were definitely use cases, but they were rare in most cases that we’ve seen. And so we thought maybe that was OK. And we decided to go for that.

There are also limitations on the number of characters you can put in a question or the answer options. And we did that because we thought users needed to be able to read and answer the question quickly. And it sort of forced researchers to be more precise, or concise, in their questions and their answers, which led to better questions we found after putting some of those limitations in. And finally we have a limited number of question formats today. We don’t have grids, we don’t have complicated branching logic, we don’t have rating questions with tens of different ratings, or multiple choice questions with tens of different options. We make it pretty simple to do. And again, there are some limitations there from the research side, but we’ve seen a lot of creative workarounds for those limitations. And we think those have led to better questions and more accurate and consistent data from our users.

DS: So how have things been going since the launch?

PM: Surprisingly well. To be honest we had no idea what the market was going into this. And it turns out that it’s not only the traditional businesses that do market research, the large CPG companies and market research firms and the creative agencies doing research, but also the hundreds of thousands of small and medium-sized businesses that we have as AdWords advertisers that are finding value in this, because they’ve never really had access to professional market research tools before. It was either too expensive or they spent too much time doing the work to get it up and running. And so what we’ve seen is that the small and medium-sized businesses are really using this in a way that we didn’t expect. They’re running different types of questions and trying to solve real business needs. And they’re getting the answers back very quickly, within 24 hours, 48 hours, which lets them make real data-driven business decisions instead of going off their gut, which is really interesting to see and has been insightful to us.

DS: And what’s been the reaction from the market research industry?

PM: I think more than anything people are curious. They want to know what we’re up to. Why we’re doing it and why we’ve made some of the choices we’ve made. I think a lot of the market research industry sees that they can provide value on top of just the raw results that we provide. And so just like we did with agencies in AdWords, we’re providing a platform to get the data they want back, but the agencies or the market research firms or the folks who are experienced in the industry are seeing ways that they can provide value on top of that data, to pull out the insightful data and really focus on the customer’s needs. We don’t have folks here at Google that will put together research for a customer. We don’t do syndicated research. We don’t do any of that. So really we’re just a platform and a sample provider. And we think we have – of any online provider at least – the most accurate, representative sample of the US population.

DS: And what would you say to the market research industry? Are there areas of collaboration? What message would you send to them?

PM: Absolutely. I think there are ways to collaborate, ways to use our platform to provide additional value to customers on top of the data. Really I don’t see us so much as a competitor but rather as a partner. We’re trying to provide an easy way to get accurate data. And I think the researcher’s role is really to understand the customer’s business needs and get that data any way they can, from any population they can get a hold of. And I guess I see us as that for a good chunk of the industry.

DS: Great. And tell me how the targeting and the sampling works.

PM: Sure. So what we do is kind of interesting. It’s based on our ad targeting. So for AdWords and our Display Network, what we do is we cookie users who view ads across our Display Network or for the content network. These are hundreds of thousands of publishers that have ads on Google. And so we can understand the websites that a user goes to, and then create a demographic profile of them in our cookies that the ads can use to target.

So for example we know, based on the sites they’ve seen, with pretty good accuracy, the age and gender of the user. We know the location of the user based on their IP address. And then from the IP address or the location we can derive some other data like whether or not they live in an urban or suburban or rural area, and what the income for that area is. And from there – what we use is this ads data and what we call the DoubleClick cookie. This is the cookie that we drop for all of our ad stuff. We actually use that cookie to infer the demographic data of the respondent as they come in. So in real time, when we see a user, we know their age and gender and location and all these other demographic data.

And so what we do is we score all of the questions that need to be answered in some way, based on the demographics that are needed to answer the question to get to a representative sample. So if the user comes in and is needed, quote needed, to answer one of the survey questions, we serve them the question that is most needed of their demographic.

We do that all in real time. And it’s not really a quota system, because even if we need to serve more of, let’s say, 18-to-24-year-old males for this particular survey, it’s not guaranteed that we’ll get a certain number of 18-to-24-year-old males. We just try to score it as best we can so that we get the most accurate or representative sample possible.

And then you can see in each of the results. For each question you can see what we call our bias table, which splits out the sample by age and gender and location. And you can see how that compares to the census data, using the US census, and what our bias for that particular sample is. So it’s really kind of an interesting way to target. And we have really three ways to target the population.
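To make the mechanics a bit more concrete, here is a minimal Python sketch of how this kind of demographic-need scoring and bias table could work. The bucket definitions, census shares, and scoring rule are illustrative assumptions for this article, not a description of Google’s actual implementation.

```python
# Hypothetical sketch of demographic-need scoring and a bias table.
# Bucket names, census shares, and the scoring rule are illustrative
# assumptions, not Google's actual implementation.

CENSUS_SHARE = {                 # target share of each (gender, age) bucket
    ("male", "18-24"): 0.065,
    ("female", "18-24"): 0.062,
    ("male", "25-34"): 0.088,
    ("female", "25-34"): 0.086,
    # ... remaining age/gender (and location) buckets would follow
}

def pick_question(respondent_bucket, surveys):
    """Serve the survey whose sample is currently most short of this
    respondent's demographic bucket, relative to the census target."""
    def need(survey):
        total = sum(survey["counts"].values()) or 1
        current = survey["counts"].get(respondent_bucket, 0) / total
        return CENSUS_SHARE.get(respondent_bucket, 0) - current  # gap vs. census
    return max(surveys, key=need)

def bias_table(counts):
    """Compare the collected sample to census shares, bucket by bucket."""
    total = sum(counts.values()) or 1
    return {
        bucket: {
            "sample": counts.get(bucket, 0) / total,
            "census": share,
            "bias": counts.get(bucket, 0) / total - share,
        }
        for bucket, share in CENSUS_SHARE.items()
    }
```

The idea is simply that each incoming respondent is routed to whichever survey is currently furthest below its census target for that respondent’s bucket, and the same sample-versus-census comparison is what the bias table reports.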

One is you can target the general population, so we get a representative sample of the general US population. Or you can target a specific demographic. So you could say males who are 18 to 24 years old in the West, the Western region, which includes California and Arizona and several other states in the West. And so that’s a separate way of targeting. And the final way of targeting is through what we call screener questions. So if you want to target, say, people who own dogs, you could ask them a two-part question. In the first part it would be are you a dog owner or do you have animals or something like that. And then if the person answered yes, or they answered the target answer, in this case it would be yes, we would show them another question from that survey. So you can screen out all the people you don’t want for your survey, which allows you to get at really unique populations of folks. If you want to target, for example, people who watch Hispanic television shows, you could do that. And ask them a bunch of questions. And because we have a very diverse set of respondents, you can almost always get the people that you want. Although some populations may take a little longer to actually survey.
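As a hedged illustration of the screener flow Paul describes, the sketch below shows the two-part logic: a respondent only sees the follow-up question if their screener answer matches the target answer. The field names and example questions are hypothetical, not the product’s real API.

```python
# Hypothetical sketch of the two-part screener flow described above.
# Field names and the example questions are illustrative only.

screener = {
    "text": "Do you own a dog?",
    "options": ["Yes", "No"],
    "target_answer": "Yes",          # only these respondents see the follow-up
    "follow_up": {
        "text": "Which brand of dog food do you buy most often?",
        "options": ["Brand A", "Brand B", "Brand C", "Other"],
    },
}

def next_question(screener, answer):
    """Return the follow-up question if the screener answer matches the
    target answer; otherwise the respondent is screened out."""
    if answer == screener["target_answer"]:
        return screener["follow_up"]
    return None  # screened out; no further questions from this survey

# Example: a "Yes" unlocks the follow-up, a "No" ends the survey for this user.
print(next_question(screener, "Yes"))
print(next_question(screener, "No"))
```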

DS: The inferred demographics – that’s certainly an approach that has not been done traditionally. And it is a more limited set of variables than what market research surveys typically have. Do you anticipate moving toward inferring more variables? Or moving away from inference and toward another means of determining people’s demographics? I assume that you’re not looking to add actual demographic questions as is traditional.

PM: Yeah, that’s true. I think we have to look at this from the ecosystem point of view of our product. We have both publishers who are putting these surveys on their sites to get paid for their content, we have researchers, and we have users. Now when a user comes to a publisher’s site and they’re asked more personal questions, things like their sexual preference, for example, they’re less willing to answer those types of questions. And we see that in the response rate. And the publishers are less comfortable with showing those types of questions on their site, because they don’t really want to upset their users in that sort of way. So we think that inferred demographics, for both the cost and the increase in speed with which you get back the results, is a good trade-off for researchers. And we use it to target our ads, so we are pretty confident in the accuracy of that data. And in fact we know a lot more about the user than we show in the survey analytics, or the reporting that we show today. So over time I think you’ll probably see more of those types of variables, including things that you generally wouldn’t get in a typical market research survey, like interests or the types of sites they’ve visited. Those are the sorts of things you can do that you wouldn’t be able to do without asking a lot of different questions in a typical market research study.

DS: Do you anticipate incorporating some of the data that is available from people’s usage of different Google services?

PM: Not Google services, but rather the sites that they visit on the internet. And again, this is all based on our ads demographic data. And we sort of understand where people have visited online by the ads that they’ve seen. Does that make sense?

DS: So it’s not integrated with any other Google services?

PM: No.

DS: And when you serve questions to different partners for them to display on their sites in exchange for premium content, do you factor in the content of the question, match it, or take it into consideration whether it’s congruent or not with the site content?

PM: No, we actually don’t.  If we were to do something like that, it would end up biasing the results in ways that would not be acceptable to researchers. So up until this point we’ve decided not to do that. Although we may do some experiments in the future along those lines.

DS: OK. So you neither match nor try to avoid a match of content?

PM: That is correct.

DS: And who owns the data that are collected?

PM: The people who pay for it, the researchers.

DS: Do you have plans to help people target populations that are lower incidence than 5%?

PM: We’d like to. The reason that we use 5%, or any percentage, as a minimum barrier is that there’s a cost to serving the screener questions without getting the answer that you want. So if you serve 10,000 questions and collect “no” instead of “yes” to those questions, then we are still paying out the publisher for those “no” answers. So we basically priced it and modeled the pricing around a way to keep us break even, or even lose a little bit of money, for a minority of screener questions. And so that’s kind of why we haven’t set the limit any lower.

One of the things that we thought about initially was pricing it based on the incidence rate. But it requires some technology that we don’t quite have yet. So in particular, we’d have to bill after the fact, or we would have to run a small number of screener questions to understand the incidence rate before we actually billed. And those are some challenges that we’re working through. So we’d like to go to smaller incidence rates, but at this point it’s not economically feasible to do so.
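A rough back-of-the-envelope sketch can show why low incidence is economically hard under this model: the publisher is paid for every screener impression served, while only the matching respondents become billable completes. The payout and price figures below are invented for illustration and are not Google’s actual rates.

```python
# Hypothetical cost model for screener questions under an assumed
# per-impression publisher payout and per-complete price.

def screener_economics(target_completes, incidence_rate,
                       publisher_payout=0.05, price_per_complete=0.50):
    """Estimate screener impressions needed and the resulting economics.

    Every screener impression pays the publisher, but only the matching
    ("yes") respondents become billable completes for the researcher.
    """
    impressions_needed = target_completes / incidence_rate
    publisher_cost = impressions_needed * publisher_payout
    revenue = target_completes * price_per_complete
    return impressions_needed, publisher_cost, revenue

# At 5% incidence, 1,000 completes require ~20,000 screener impressions;
# at 1% incidence they require ~100,000, so publisher payouts quickly
# outstrip a flat per-complete price.
for rate in (0.05, 0.01):
    imps, cost, rev = screener_economics(1000, rate)
    print(f"incidence {rate:.0%}: {imps:,.0f} impressions, "
          f"payout ${cost:,.2f}, revenue ${rev:,.2f}")
```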

DS: Do you have any plans to use the existing service that you have or evolve the service that you have to ask people questions that could later be used to target them for a survey invitation? For example, if someone was looking for people that have a particular type of bicycle, you could screen them and then you’re essentially doing something along the lines of what panel companies do right now.

PM: No, we have no plans to do something like that.

DS: Another large technology company that had some experience in the market research industry was LinkedIn. They entered the market research industry and ultimately decided to exit it. Did you look at their experience when you were planning Google Consumer Surveys?

PM: We did. I mean, we looked at a lot of different companies and what they were doing. In the end, we thought that, based on our relationships with the advertisers that we had, we could make a real market out of this. Or there was a market to be had. And so we decided to enter the market and understand whether or not there is a market for what we’re providing. And so that’s kind of the road that we had taken. Yeah, I mean I think a lot of companies have tried different things, and we’re just another one of those companies trying something new.

DS: Do you have an expectation about other large technology players entering this market?

PM: I mean, of course we’ve thought about, or thought through, what the options are for a lot of these players. Yeah. I think that this data is valuable and I think the other companies will find it interesting.

DS: Let me ask you about a few other possible directions that you could go and see what are your thoughts on those. The field of text analytics, looking at the vast amount of data that’s out there on blogs and websites and inferring sentiment and thoughts from the population out there on the internet, analyzing that text, and delivering insights based on that. Is that something that Google could and/or would be interested in doing?

PM: I don’t think anything’s off the table for us, but what we’re focused on right now is the sort of self-service aspect of this. So companies coming in and wanting to ask users their own questions. I think if we got into doing this automatically for companies or others, it would be more like a syndicated research business. And that’s not something, at least right now, we’re interested in doing.

DS: Are these questions served on mobile websites, on mobile devices?

PM: Yes. Some of our publishers are mobile publishers. I think mobile is really interesting because of the form factor of the screens on phones, which makes it a little more difficult for a respondent to reply to a question that they’ve seen. So I think there’s a lot of work that could be done in mobile to make the experience better. I think the same trade-offs could be made in the mobile space that we’re making with publishers online, on the web. For example, access to an app is very similar to access to content. You could imagine ways for users to choose to pay for apps with their time instead of their money.

DS: And it’s just in English now?

PM: Right now it’s just in English, yes.

DS: OK. And are you looking at expanding that?

PM: Yes, of course. I mean, we started with what we know best here in the US. But, yes, we are looking actively at international expansion.

DS: OK, and have you considered adding qualitative type questions or inquiries in addition to the quantitative question, the standard multi-select questions and the like?

PM: Yeah, of course. I mean I think, again, our focus has been the respondent experience first, more than anything else. And so when you start adding things like open-ended questions, qualitative questions, the user experience, the respondent experience, decreases a little bit. It takes more effort, more work to do that. So we’re trying to balance the fact that qualitative research is really interesting and useful to researchers, but is a little more difficult for respondents to do in an accurate way. We also want to provide really good analytics and reports on that qualitative data. So we’re trying to think through those scenarios. As of right now, we don’t have any qualitative-type questions, but it’s something that we’re looking into and I expect to have something soon in that space.

DS: Are you considering additional geographic targeting options?

PM: One thing I should say about the geographic targeting. While you can only target a question to a region, once you get the report back you can drill down all the way to the city level, actually state level, in the UI. So you can see state by state answers in the reporting interface. What we want to have first is the sort of representative sample of the US population. It becomes an inventory problem when users target different sub-populations of the larger US. And so what we’re trying to do is balance the inventory that we have and the requests from our users. And we made a decision up front to do only region-based targeting to start. But eventually that will be more granular. And we can go down to a very granular level if we wanted to. And we’ve done several tests of that as well.

DS: How about targeting in other countries?

PM: Yeah. So I think the same thing applies to other countries. Right now it’s only the US, but as we expand internationally we’ll go after the places where the market is the largest, first, and easiest for us, so the English speaking countries. And then as we expand our publisher base, we can get into some of the countries that are harder to target with market research today and are more valuable. Like a lot of the African countries, for example.

DS: What can people expect from Google Consumer Surveys in the next three to six months?

PM: Well, the most interesting thing, I think – we talked about speed a little bit earlier in the interview, and speed is actually important for us from an engineering and product perspective as well. Since we launched, we’ve done four major releases. About every two weeks we’re launching something new. And so I think in the next three to six months, what you’re going to see is that continue. Lots of new features and releases, things for both researchers and respondents. We’re really concentrating on the things that Google is really good at: pulling out interesting bits of data from large data sets, building a really fast and responsive UI that makes it both simple and quick for researchers to get the data they want, and new question types to fill out the formats we have today. Those, I think, are the things you’ll see coming up fairly soon.

DS: Paul McDonald, thank you for your time today. I really appreciate it.

PM: It’s a real pleasure talking to you again. Thank you very much.


Tags: google, respondent experience, surveys

