A/B testing has long been a tactic for companies evaluating “two versions of a landing page, web page or mobile app feature” (Rawat). The most common A/B scenario involves changing aesthetic details, such as button size or graphics, and deploying those changes among active users to test their effect. However, the exemption of major social networks from the federal “Common Rule” has created a sinister perversion of the A/B test, one that is deeper, more deceptive, and reliant on implicit rather than informed consent.
Defining C/D Testing
Instead of testing the color and placement of design objects among active users of an interface, C/D experimentation occurs when the “programming code of a website’s algorithm is altered to induce deception with manipulated results. This is a… form of testing, which we call code/deception or C/D” (Benbunan-Fich).
Facebook’s Use of C/D Testing in the 2010 Elections
One example of C/D experimentation occurred on election day in 2010, when Facebook analyzed how social influence and contagion can change people’s voting behavior. It intentionally manipulated news feed algorithms to curate different social media experiences for individual users around “get-out-the-vote” messages. “In the turnout experiment, Facebook’s evidence indicates that social messages showing the faces of friends who voted had a direct effect on increased turnout directly by approximately 60,000 voters” (Benbunan-Fich).
The results were reached by obtaining public voting records and matching them against the profile names of the users experimented on.
When queries, feeds, or social connections are altered to produce specific measurable effects without proper or clear opt-in from the user, the results are far from ethical or comfortable for test subjects.
Facebook failed to:
- Obtain informed consent, ensuring users understood they were participating
- Address privacy issues at the boundary between online and offline activity, despite directly affecting real-world outcomes for users without appropriate review or ethical scrutiny
- Outline general goals or motivations for conducting the research to its users
- Provide proper channels or recourse for users who may have faced adverse consequences due to the test
It isn’t a stretch of the imagination to see the potentially damaging, even dangerous, outcomes of deceptive experiments that manipulate real-world behavior, election outcomes, and the psychology and identity of unaware participants. Perhaps more damning is the kind of environment that led to the propagation of C/D testing in the first place.
“Common Rule” Exemptions
The “Common Rule” is a federal policy that mandates protections for human subjects of research. The compliance requirements for government-funded research institutions include obtaining and “documenting informed consent, requirements for Institutional Review Board (IRB) membership, function, operations, review of research, and record keeping” (Korenman).
The loophole exploited by companies like Facebook is that much of the funding for and findings from this type of testing are proprietary to the companies conducting it and never released to the public. Nor does A/B or C/D testing have to contribute to the larger body of UX research; it can be kept under wraps through “internal,” company-led ethical reviews. Had Facebook never published its 2010 election study, it would never have faced criticism for it.
What cases like these reinforce is that a company engaging in C/D testing is better off not disclosing or publishing its tests, even though the behavior and lifestyle of its customers are affected by them.
Is this the type of message we really want to convey?
The onus is on companies to provide transparency and accountability with respect to ethical guidelines. Rather than letting companies bury the studies they conduct behind the protections of company property, it is time for government mandates covering special cases like social networks, where usability testing often shades into manipulating consumer decision making in social and political arenas, with deep personal impact. “I think part of what’s disturbing for some people about this particular research is you think of your News Feed as something personal. I had not seen before, personally, something in which the researchers had the cooperation of Facebook to manipulate people… Who knows what other research they’re doing” (LaFrance). Gaining informed consent and participation will strengthen consumer faith in company practices, which enhances customer trust and bottom lines while diminishing the predatory nature of C/D experimentation. Companies need to be held legally and financially accountable for answering:
- Is the manipulation theoretically or logically justified?
- Is a manipulation necessary for my research?
- Could the manipulation be potentially harmful in any way?
- How might our users feel about being studied? (Bowman)
It is time for all of us who engage with social networks to undergo our own “internal review” and ask whether a company that engages in C/D testing is reliable enough to regulate itself.
Works Cited/Helpful Links
Bowman, Nicholas. “The Ethics of UX Research.” UX Booth, 26 Aug. 2014, www.uxbooth.com/articles/ethics-ux-research/.
Benbunan-Fich, Raquel. “The Ethics of Online Research with Unsuspecting Users: From A/B Testing to C/D Experimentation.” Research Ethics, vol. 13, no. 3-4, 2016, pp. 200–218, doi:10.1177/1747016116680664.
Korenman, Stanley. “Common Rule.” Chapter 2 – Common Rule, RCRH, 2006, ori.hhs.gov/education/products/ucla/chapter2/page04b.htm.
LaFrance, Adrienne. “Even the Editor of Facebook’s Mood Study Thought It Was Creepy.” The Atlantic, Atlantic Media Company, 1 July 2014, www.theatlantic.com/technology/archive/2014/06/even-the-editor-of-facebooks-mood-study-thought-it-was-creepy/373649/.
Rawat, Siddarath. “A/B Testing – The Complete Guide.” VWO, 8 June 2018, vwo.com/ab-testing/.