Internal Review: Ethical Concerns of C/D Testing by Private Companies



A/B testing has long been a tactic for companies evaluating “two versions of a landing page, web page or mobile app feature” (Rawat). The most common A/B scenario involves changing aesthetic details like button size or graphics and deploying those changes among active users to test their effect. However, the ethical impropriety of major social networks exempt from the federal “Common Rule” has created a sinister perversion of the A/B test that is deeper, more deceptive, and reliant on implicit rather than informed consent.


Defining C/D Testing:


Instead of testing the color and placement of design objects among active users of an interface, C/D experimentation occurs when the “programming code of a website’s algorithm is altered to induce deception with manipulated results. This is a… form of testing, which we call code/deception or C/D” (Benbunan-Fich).


Facebook’s Utilization of C/D Testing in the 2010 Elections


One example of C/D experimentation occurred on election day in 2010, when Facebook analyzed how social influence and contagion can change people’s voting behavior by intentionally manipulating News Feed algorithms to curate different social media experiences for individual users around “get-out-the-vote” messages. “In the turnout experiment, Facebook’s evidence indicates that social messages showing the faces of friends who voted had a direct effect on increased turnout directly by approximately 60,000 voters” (Benbunan-Fich).

The results were reached by obtaining public voting records and cross-referencing them with the profile names of the participants experimented on.


Ethical Concerns


When queries, feeds, or social connections are altered to produce specific measurable effects without proper or clear opt-in from the user, the results are far from ethical or comfortable for test subjects.

Facebook failed to:

  1. Gain consent with user comprehension of participation
  2. Address privacy issues regarding the boundaries of online and offline activity, having directly affected real-world outcomes for users without appropriate review or ethical questioning
  3. Outline general goals or motivations for conducting the research to its users
  4. Provide proper channels or recourse for users who may have faced adverse consequences due to the test

It isn’t a stretch of the imagination to see the potentially damaging, even dangerous, outcomes of deceptive experiments that manipulate real-world behavior, election outcomes, and the psychology and identity of unaware participants. Perhaps more damning is the kind of environment that led to the propagation of C/D testing in the first place.


“Common Rule” Exemptions   


The “Common Rule” is a federal policy that mandates protections for human subjects who take part in or conduct research. The requirements for ensuring compliance at government-funded research institutions include obtaining and “documenting informed consent, requirements for Institutional Review Board (IRB) membership, function, operations, review of research, and record keeping” (Korenman).

The loophole exploited by companies like Facebook is that much of the funding for, and the findings of, this type of testing is proprietary and never released to the public. Neither A/B nor C/D testing meaningfully contributes to the larger body of UX research, and both can be kept under wraps through “internal,” company-led ethical reviews. Had Facebook never published its 2010 election study, it would never have faced criticism for it.

What cases like these reinforce is that a company engaging in C/D testing is better off not disclosing or publishing its tests, even though the behavior and lifestyles of its customers are affected by them.

Is this the type of message we really want to convey?  




The onus is on companies to provide transparency and accountability where ethical guidelines are concerned. Rather than letting companies bury the studies they conduct behind the protections of proprietary property, it is time for government mandates in special cases like social networks, where usability testing often shades into manipulating consumer decision making in social and political arenas (actions that have deep personal impact). “I think part of what’s disturbing for some people about this particular research is you think of your News Feed as something personal. I had not seen before, personally, something in which the researchers had the cooperation of Facebook to manipulate people… Who knows what other research they’re doing” (LaFrance). Gaining informed consent and participation will strengthen consumer faith in company practices, enhance customer trust and bottom lines, and diminish the predatory nature of C/D experimentation. Companies need to be held legally and financially accountable for answering:


  • Is the manipulation theoretically or logically justified?
  • Is a manipulation necessary for my research?
  • Could the manipulation be potentially harmful in any way?
  • How might our users feel about being studied? (Bowman)


It is time for all of us who engage with social networks to undergo our own “internal review” and ask whether a company that engages in C/D testing is reliable enough to regulate itself.


Works Cited/Helpful Links


Bowman, Nicholas. “The Ethics of UX Research.” UX Booth, 26 Aug. 2014.

Benbunan-Fich, Raquel. “The Ethics of Online Research with Unsuspecting Users: From A/B Testing to C/D Experimentation.” Research Ethics, vol. 13, no. 3–4, 2016, pp. 200–218, doi:10.1177/1747016116680664.

Korenman, Stanley. “Common Rule.” Chapter 2 – Common Rule, RCRH, 2006.

LaFrance, Adrienne. “Even the Editor of Facebook’s Mood Study Thought It Was Creepy.” The Atlantic, Atlantic Media Company, 1 July 2014.

Rawat, Siddarath. “A/B Testing – The Complete Guide.” VWO, 8 June 2018.

Design Critique: Babbel (iPhone App)



Babbel is a language-learning application available for both web and mobile platforms. Developed by Lesson Nine GmbH in tandem with a team of language-learning experts, the app supports the study of 14 different languages through 10–15 minute “bite-sized” lessons that increase in difficulty as you progress through the program. The developers claim that, through gains in user confidence, comprehension, and retention, their app is proving itself one of the most effective learning apps on the market.

It piqued my interest initially as a tool that addresses my own yearning to acquire a new language, and as a chance to see how the platform’s designers treated usability as an essential component of what helps (or hinders) our cognition of new things (i.e., if it is clear and simple to use, it will not distract from lessons and tutorials).



Figure 1.


The introductory processes within the app were very consistent and easy to navigate as a whole. The initial interface gave clear directives on the type of language I read and speak during day-to-day activities, with signifiers via an arrow telling me more options are available if I click the icon. In the “I want to learn” section of the setup, I could see a potential pitfall in the lack of a visible signifier to let me know more language options exist if I pan my finger downward beyond the four that appear initially (my recommendation is a scrolling emblem, but it could also be options near the bottom loading slightly off the page, as in Figure 2). I found the second and third stages rather ingenious as a combination of Don Norman’s natural mapping design principle and the use of implicit iconography, since roadways, pathways, and point-A-to-point-B metaphors are cross-cultural. I knew placing my finger on beginner, advanced, or the space between would situate me in a suitable spot within the program.



Figure 2.


In the main hub of the program are diagrams with each course selection, its parts, and the lessons completed clearly labeled and divided chronologically through numeric ordering. The top navigation bar (labeled “beginners courses”) allows the user to search through all the learning pathways and select a more appropriate path if the choice from the previous mapping proves to be unfit. A small snag can be seen in the second pane of Figure 2, which labels lessons in the language you are trying to learn: a potential breeding ground for knowledge-based mistakes. In other words, my obviously incomplete knowledge of a language I’m unfamiliar with makes it a confusing standard for labeling a reference or index. I can surmise the approach is intended to make me more familiar with seeing the language, but as a design quality it could lead to action-based or memory-lapse slips, or problems recalling, selecting, and retrieving the lesson I want. I would change the labels to the user’s native or most familiar language but keep the very familiar checklist format. I did appreciate the lock icon actively illustrating, and hindering, the selection of a lesson I hadn’t paid for. This cultural constraint (i.e., to most cultures a lock shape means off limits), paired with the immediate feedback of a pop-up declaring “you do not have a subscription” (for those who try anyway), signals an intentional block rather than a system error.



Figure 3.


The lessons relied on a series of carefully designed word games and puzzles. I found them to be pretty straightforward, with clear buttons and jigsaw shapes most users would have a conceptual model for. A good example is the second pane of Figure 3, which uses corresponding perforated edges to clue the user into the fact that one piece must go with the other to make a whole (followed by an animation that shows it as well). My design critique, however, has mostly to do with the composition of elements. It is unnecessary work for a viewer to traverse to the bottom for two buttons (pane 1, Figure 3) or for the bottom word to obscure and cut off the shapes you need to select (pane 3, Figure 3). I did enjoy the consistency of color, font, shapes, and operations, though, and the clear prominence of icon enlargement upon selection. A bonus was clear error messages (fourth pane, Figure 3) signified by color changes and shaking when a wrong choice was made. I found these to be efficient behavioral cues (i.e., won’t do that again).