Good Intentions ≠ Good Research: Identifying & combating cognitive bias in usability research


Ignorance is not bliss

There’s an old saying you might have heard. It goes something like this: “fool me once, shame on you; fool me twice, shame on me.” Usually, this is said after a series of betrayals by a friend, a family member, or a loved one. But could it reach into UX research as well?

The fact is, trying to do good, clean, ethical research doesn’t mean you’re actually doing it. We all strive to be ethical human beings, and that includes our professional work. There aren’t many UX professionals with posters hanging on their walls that say: “Pulling one over on users is the key to success.” But if you do see someone with a poster like that, you might want to delete their app. Immediately.

In all seriousness, good UX means you “have a deep understanding of users, what they need, what they value, their abilities, and also their limitations.” (Usability.gov, 2017) Empathy, and the use of that empathy, leads to design brilliance and usability success. Yet despite this lofty goal, unethical research happens, and it is more common than we think. Why is this? Well, with over 160 recorded types of cognitive bias, it’s harder than it seems to collect unbiased, objective, and productive data. (Kiryk, 2017) With this many biases out there, we can’t hope that our intentions are enough to combat what I’ll call the “cognitive creep.” We have to proactively fight bias – and the first step is to name your enemies.

The framing effect

Image credit: Pixabay

Two of the more insidious cognitive biases out there, and two that work hand-in-hand, are leading questions and the framing effect. The framing effect describes a cognitive phenomenon by which we react differently to the same information depending on how it’s worded. It was first described by the psychologists Tversky and Kahneman in a seminal 1981 study that asked people about the impending outbreak of an “Asian disease” expected to kill 600 people. Two groups were asked to choose a program to help combat the outbreak.

Group 1 was told that Program A would save 200 people, while Program B would have a one-in-three chance of saving all 600 and a two-in-three chance of saving no one. As you might expect, most people chose Program A.

Group 2 was told that Program A would result in 400 deaths, while Program B would have a one-in-three chance that nobody would die and a two-in-three chance that all 600 would die. Here, most people chose Program B. (Tversky & Kahneman, 1981)

Here’s the rub: in both groups, Program A means 400 of the 600 people die. But because the options were framed in terms of deaths for Group 2, people chose Program B, hunting for some positive outcome among negatively worded choices. The exact same odds and the exact same scenario produced completely different results, purely because of wording and framing. Crazy, right?
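If you want to check that equivalence yourself, here’s a quick expected-value comparison (my own back-of-the-envelope arithmetic, using the study’s numbers):

Program A: 200 of 600 people saved, guaranteed.
Program B: (1/3 × 600) + (2/3 × 0) = 200 of 600 people saved, on average.

Statistically, the two programs are identical; only the framing changes.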

You are not immune

You might have the intellect of Einstein, but I’m sorry to say that the framing effect still works on almost everyone. Even UX researchers. Nielsen Norman Group ran a study in which they asked roughly 1,000 UX practitioners whether a search function should be redesigned, presenting the same usability-test findings in two different ways. Check out what happened for yourself:

Image source: Whitenton, 2016

The frame of “4 users not finding the search function” seemed like a much larger issue because it focused attention on the negative finding: poor usability for a small fraction of users. When told the other side of the same story, that 16 users found the search function successfully, more practitioners agreed that redesigning the website would be a fool’s errand. Now imagine if we had followed the advice of the first group and redesigned for those 4 users: we’d have risked making the majority of our users very unhappy. (Whitenton, 2016)

So, you may be saying to yourself by now, “well, this is interesting and all, but I would never frame a question in a way that would skew an answer.” Like I said before, cognitive bias is insidious. If you are overconfident that you won’t fall victim to it, you may not realize you have until it’s too late.

Don’t be like Walmart

Walmart is our UX cautionary tale of the day. In 2009, Walmart launched Project Impact, designed to reinvigorate its stores so they could rival Target’s clean, open appeal. In a survey, they asked customers a simple question: “Would you like Walmart to be less cluttered?” (Popken, 2011) As you might imagine, most people answered yes. Who wants clutter anywhere in their lives? The very word is imbued with meaning. And so, Walmart removed 15% of its inventory, took out the large pallet displays, and generally created a more open look and feel (similar to Target’s). As a result of a redesign based on a leading, unethically framed question, Walmart paid the heavy price of a $1.85 billion decline in sales. It was an expensive exercise in how not to do usability research and in the power of cognitive bias.


And so, the moral of this tale is as follows: critically evaluate every single question you ask, because cognitive bias could be lurking around the corner. My best advice is to look at each question and ask whether you’re hoping for a certain response. If you are, it’s probably a bad question. It’s time to unlink good intentions from good research, because the plain truth is that good intentions alone aren’t enough to ensure ethical research. The sooner you recognize bias’s “cognitive creep,” the sooner you can plan for it and improve your research. And trust me, you’ll be great.


References

Kiryk, A. (2017, September 7). Overcoming cognitive bias in user research. Design at NPR.

Popken, B. (2011, April 18). Walmart declutters aisles per customers’ request, then loses $1.85 billion in sales. Consumerist.

Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211(4481), 453–458.

Usability.gov. (2017, July 10). User experience basics.

Whitenton, K. (2016, December 11). Decision frames: How cognitive biases affect UX practitioners. Nielsen Norman Group.

Design Critique: Calm (iOS app)

Calm is an app that aims to reduce anxiety and stress through guided meditations, nature sounds, and sleep stories. It accommodates a wide range of preferences: hurried users can quickly launch a 10-minute “Daily Calm,” while users seeking specificity can pursue 30-day programs centered on a particular theme. Named Apple’s 2017 iPhone App of the Year, Calm has been downloaded by a staggering 5–10 million users. In a field saturated with meditation apps, what makes Calm so successful? Good design.

In The Design of Everyday Things (2013 edition), Don Norman explains that “good design is actually a lot harder to notice than poor design, in part because good designs fit our needs so well that the design is invisible …” (p. xi). This critique will assess the usability of the Calm app using the design principles laid out in his text.

Calm Homepage: Clean Layout & Understandable Signifiers

From the start to the end of a user’s experience, Calm minimizes the need for knowledge in the head. This significantly reduces mental strain and the potential frustration of forgotten information, such as account details: users need to enter a username and password only once, right after downloading the app.

At launch, the designer’s conceptual model matches the user’s mental model, meaning that the system image is a well-designed, intuitive one. Users expect to find a calming experience, one where they can easily begin meditating; any gulf of execution would sour the experience (even more so for the users of this app, who are seeking peace and calm). The homepage’s minimalist layout accommodates them, with well-understood icons as signifiers sitting on the perimeter of the screen and a peaceful nature image taking residence in the center. Busy users can begin meditating immediately by pressing the play-button icon on the “Daily Calm,” a common signifier for starting a piece of media. Discoverability and understandability are clear throughout the app, with the largest signifier on the home screen being the “+” button with the word “meditate” underneath. The “+” signifies that a menu of options will open, and the app does exactly that. Both the goal and the way to get there are clear; there is no gulf of execution.

Calm Player: Intuitive Controls & Effective Constraints

In the player, a clean, uncluttered design shows the user essential information: the name of the meditation, a countdown timer, stop/pause controls, and immediate feedback by way of a green circle that progressively grows as the meditation plays. This feedback bridges the gulf of evaluation. For instance, if a user sees that the timer is counting down and the progress circle is filling, yet hears no audio, they can eliminate potential reasons for the issue and check their volume and device settings. To this end, the volume bar on the bottom has clear and intuitive mapping: slide right to increase, left to reduce. Lastly, if a user tries to leave before playback ends, a dialog box asks whether they really meant to leave the meditation, a constraint (in Norman’s terms, a forcing function) designed to prevent unintentional actions.

Calm Meditation Menus: Exploratory Seeking or Bust

The user enters the meditation menu using the “+” / “meditate” signifier on the home screen. They encounter a scrollable bar of themes with which to sort Calm’s library, with even more colorful, descriptive tiles beneath signifying unique programs. When a user selects a theme from the bar, it immediately highlights in purple, providing instantaneous feedback that the selection was made. The ‘X’ signifiers on the top-right of each screen allow a user to escape the current screen at any time, promoting a comforting feeling of control.

When a program tile is finally selected, such as “21 Days of Calm,” the play-button signifier is most prominent, providing great understandability for how to begin. A great example of signifiers working in tandem with mapping is the row of day numbers in the program. These are listed horizontally and begin at a left indent on the screen, suggesting that the user can scroll right in sequence to see future days, or left to see previous days. Lastly, a white bubble around the currently viewed day provides feedback and an immediate understanding of which day the user is viewing.

Potential Issue & Recommendation:

While the meditation menu’s design supports users who prefer exploration, it does not accommodate other information behaviors. Without a search bar in the meditation menu, a gulf of execution could occur for users who prefer to find a theme by keyword rather than by exploration.

Figure 4, below, shows a recommendation for implementing this feature, drawn from a competing app called Simple Habit. While Simple Habit’s color theme is perhaps too muted, it places search at the top for discoverability and even includes potential search ideas.

The Consensus

Calm’s brilliant design manifests itself in four key ways:

  1. Limited need for knowledge in the head
  2. Uncluttered, clear layout of signifiers
  3. Understandable signifiers throughout that eliminate gulfs of execution
  4. Frequent and visible feedback that eliminates gulfs of evaluation

Adding a search bar to the meditation menu would go the extra distance to ensure that users with varying information behaviors are accommodated, avoiding any potential gulf of execution. Nonetheless, it is easy to see why Calm is such a beloved app.