
Ethics = Knowing The Difference Between What You Have A Right To Do And What Is Right To Do

Finn Raben of ESOMAR comments on themes related to ethics and privacy in the time of Cambridge Analytica, including government regulation vs. self-regulation, GDPR, implications for US policy, and the role of trade organizations.

Editor’s Intro: Last week, the ARF co-sponsored a major Town Hall event with GreenBook, focusing on ethics and privacy issues spurred by the recent Cambridge Analytica debacle. I attended and found the discussion both informative and passionate, a rare combination. It could have gone on far longer than the allotted two hours, so I am sure that this is just the start of a longer process. Given the interest aroused, we have two summaries of the event for our readers. This article is by Finn Raben, who heads ESOMAR, and brings his association’s perspective and history on research ethics, including their contributions to the creation of GDPR, to the debate.

Last week (26th April), I attended one of a series of meetings that had brought me from Europe to the USA for a week, namely the ARF Town Hall meeting on research ethics.

I was intrigued to see how this meeting would play out, as it had been prompted by the recent publicity surrounding the relationship between Cambridge Analytica and Facebook, and the numerous allegations of data misuse.

The messaging and publicity surrounding this event had used phrases such as “Call for Data Ethics Standards”, “new privacy rules”, “new standards for consumer data” and “research ethics.” All of these further heightened my interest, as at ESOMAR we have worked hard to develop and maintain an established “currency” of research ethics in more than 50 markets globally (the ICC/ESOMAR Code, adopted and endorsed by almost 60 different associations around the world).  This has led to ESOMAR and its partner GRBN (Global Research Business Network) being invited as one of the key contributors to the design of the GDPR and e-Privacy directives which are now being implemented in Europe.

Scott McDonald (President & CEO of the ARF) provided a refreshingly honest introduction to the meeting, explaining that the alleged Facebook/Cambridge Analytica dust-up had brought home to the ARF a need to develop a code of conduct for their members, which was reflective of the world as it is today.

While I will not review the entire meeting verbatim, there were one or two sessions which raised some very interesting points for me…

Allie Bohm (Policy Counsel, Public Knowledge) made the case for wider legislative reform governing consumer rights. The basis of her position was the GDPR (General Data Protection Regulation), which she felt showed Europe was leading the way in strengthening the protection of consumer data (especially personal data) in this digital era, and which would have prevented the Facebook/Cambridge Analytica event(s) from ever happening.

Two interesting reactions caught my attention here:

The first was an acceptance that the European (human) right to privacy provides a strong platform on which to develop such legislation, and the second was what felt to me like ‘relief’ that the GDPR would be tested “over there” (i.e. in Europe) without impacting US business. This I found more troubling, as it indicated a lack of awareness that any business conducted in Europe (even from afar) must be compliant with the GDPR as of 25th May.

Paul Donato (CRO, The ARF) then followed with a comparative review of codes and privacy policies. This had the potential to be one of the most illuminating and insightful elements of the entire debate/discussion, but the time limit meant that the “meat” of differentiation could not be adequately debated. However, the points that stood out for me were:

  1. Where was the explanation/differentiation between Ad-space and Research-space?
  2. What distinguishing legislative principles need to be maintained between the two?
  3. Codes list ethical principles; guidelines provide implementation advice; privacy policies explain corporate commitments; and Terms of Use outline contractual conditions. These are NOT the same, yet all have roles to play.
  4. AdTech “rules” are (by definition) more prescriptive, because historical performance (and self-regulation – particularly of OBA, or online behavioral advertising) was shown to be weak and insufficient.
  5. By contrast, research self-regulation has 70 years of success behind it and is recognized as a “best practice” industry… so what is the ideal overlap?

For those who are interested, AdAge published quite an insightful article on January 6th, written by Adam Kleinberg. In it he stated:

“The conglomeration of software that constitutes “ad tech” fundamentally represents an extremely valuable set of tools for advertisers. Being able to aggregate and make sense of data is a good thing. Being able to provide personalized and relevant messages to consumers is a good thing. Being able to eliminate inefficiency through automation is a good thing.

But, the lawless landscape in which this technology has emerged has created an environment that has undermined its own potential. It has had a dramatically negative impact on the perceptions of customers and the way they respond to marketers seeking their attention. An absence of accountability and limited transparency has resulted in bad user experiences and decreased loyalty.”

This highlights the key difference between ‘Research-land’ and ‘Advertising-land’, in as much as the former guarantees no return path, whereas Ad-Tech is specifically designed to facilitate that return path through personalized, targeted communication.  This tension alone should warrant further debate and understanding, to enable a more holistic comprehension of what it is that the ARF should and can produce, and what are the “wheels” that they should not “reinvent”.

Following Paul Donato came Ben Hoxie of mParticle, with an initial guide to the GDPR, and a subsequent panel discussion on what a code of conduct should do. I found both of these very positive and optimistic discussions, despite a lack of awareness of how other codes currently implement and enforce their requirements.

It was very encouraging that the GDPR was seen as relevant to the discussion,  a clear basis for moving forward, and a facilitator, not a hindrance to the research business. I found it very reassuring that the panel clearly recognized the advantage of setting a “higher” standard than just the legal minimum, and that such standards are essential to maintaining trust in what we do.  I was also very impressed that a discussion of legislation vs. self-regulation considered the GDPR principles as a good place to start.

Finally, in the open forum part of the meeting, some of the more interesting challenges were voiced, including: What is the ultimate role of the proposed new code? Will it convince Facebook to join, or to abide by the rules? What level of self-regulation will the code enable, if any? What enforcement or disciplinary steps will be linked to it? Do particular industry structure(s) better allow for self-regulation?

There were also interesting questions raised, such as whether the GDPR’s “right to be forgotten” clashes with the First Amendment.

Overall, I left the meeting with mixed feelings. On the one hand, I have great sympathy for the dilemma that Scott and the ARF are in, in relation to ethical grounds for determining conduct. This may indeed best be resolved by establishing a code of ethics, but if so, then that code should not repeat or reinvent what is already working successfully – and I would be delighted if ESOMAR could contribute to that process.

On the other hand, I do not believe that this needs to be a new code for research. The research industry – while (still!) debating these issues – has made considerable progress in formulating an approach which safeguards its own practitioners, and which can be folded into its existing self-regulatory practices.

Nonetheless, we currently have a unique opportunity to collaborate in the areas where Ad-land meets Research-land.  If we concentrate on where our concerns overlap, then we will be able to put forward a compelling and unified proposition – one which would allow our entire industry to convince all of the digital players to adhere to existing, successful codes, and one which would convince legislators that we are seeking to do “good” by our participants.  What a powerful and very practical statement that would make to those utilizing ill-gotten data.

If you are interested, this link will bring you to a discussion paper that ESOMAR and the GRBN have authored, which reviews the challenges of using what we term “secondary data” (i.e. the type of data that forms the basis of the Facebook/Cambridge Analytica debate).
