Two ways to measure users’ emotions — survey and testing



I will discuss why users’ emotions matter in designing products, and methods to measure and analyze users’ emotions to help achieve a better user experience. The research focuses on the Facial Action Coding System (FACS) and how it can be used to uncover insights while users complete a test.

Why should we care about users’ emotions?

From Norman’s Three Levels of Design, we know that we can never ignore users’ emotional feedback. Norman indicates that the visceral and behavioral levels are subconscious and the home of basic emotions, while the reflective level is where conscious thought, decision-making, and the highest level of emotions reside (50). Positive emotional experiences with an interactive product are assumed to lead to good user experience and, ultimately, to product success; positive emotions are mostly associated with good user experience, and negative emotions with low usability (1). Thus, if designers want to design good products, they need to consider users’ emotions throughout the design process.

Methods to measure users’ emotions


  • Paper-based survey. The tester can print the survey on paper and have the participants fill it out. However, it is hard to collect all the data from paper when there are many participants, so more and more testers are turning to computer-based surveys.

  • Computer-based survey. The tester can post the survey online and distribute the link to the participants; Qualtrics is a good example. After all the participants finish, the survey program can output the data as pie or bar charts, which makes it easier for the tester to analyze the results and extract insights.
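Once the raw responses are exported, even a few lines of code can produce the tallies behind such a chart. The sketch below assumes a hypothetical 5-point rating question; the response values and variable names are invented for illustration:

```python
from collections import Counter

# Hypothetical 5-point Likert responses exported from an online survey;
# the numbers below are made up for illustration.
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

counts = Counter(responses)
total = len(responses)
for rating in range(1, 6):
    share = 100 * counts.get(rating, 0) / total
    print(f"Rating {rating}: {counts.get(rating, 0)} responses ({share:.0f}%)")
```

The per-rating counts and percentages printed here are exactly the values a survey tool would plot as a bar chart.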


Testing is based on the Facial Action Coding System (FACS), a system that taxonomizes human facial movements by their appearance on the face. It builds on a system originally developed by the Swedish anatomist Carl-Herman Hjortsjö, and was later adopted by Paul Ekman and Wallace V. Friesen and published in 1978 (Wikipedia contributors “FACS”). A common misconception is that FACS reads or detects emotions; in fact, FACS is only a measurement system and does not interpret the meaning of expressions. The scientists separate facial muscle movements into three basic categories: Main Action Units, Head Movement Action Units, and Eye Movement Action Units. Each Action Unit is numbered, and each number represents a particular facial muscle movement; for example, Action Unit 1 is the inner brow raiser.
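This numbering scheme can be thought of as a simple lookup table. The sketch below uses a handful of published AU labels (e.g., AU 1 is the inner brow raiser); the function name and structure are illustrative, not part of any official FACS tool:

```python
# A small subset of FACS Action Units, keyed by their official numbers.
# The labels come from the published FACS taxonomy; everything else
# (function name, structure) is an illustrative sketch.
ACTION_UNITS = {
    1: "Inner brow raiser",
    2: "Outer brow raiser",
    4: "Brow lowerer",
    6: "Cheek raiser",
    12: "Lip corner puller",
    15: "Lip corner depressor",
}

def describe_aus(codes):
    """Translate a list of coded AU numbers into readable movement names."""
    return [ACTION_UNITS.get(code, f"Unknown AU {code}") for code in codes]

print(describe_aus([1, 12]))  # ['Inner brow raiser', 'Lip corner puller']
```

Note that the output is a description of muscle movements only; interpreting those movements as emotions is a separate step that FACS itself does not perform.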


There are many application fields for FACS. With facial expression analysis, we can test the impact of any content, product, or service that is supposed to elicit emotional arousal and facial responses. The research area most relevant to our future careers is software UI and website design. Ideally, handling software and navigating websites should be a pleasant experience: frustration and confusion should be kept as low as possible. Monitoring facial expressions while testers browse websites or software can provide insights into the emotional satisfaction of the target group. Whenever users encounter roadblocks or get lost in complex sub-menus, you will likely see more “negative” facial expressions such as brow furrowing or frowning. Those are the areas the designers need to work on.

Emotion research lab

Emotion Research Lab is a company that offers a service to help testers record users’ facial expressions and translate those expressions into an analysis report. Two modes can be applied: online and offline.

In the online mode, users take the test online while the built-in camera records their facial expressions. The software then generates a report, using its algorithms to analyze the emotions behind the expressions.

In the offline mode, the participants do not need to be present in the lab. They can stay at home, use a camera to record their facial expressions while doing the test, and send the recorded video to the lab afterwards so that the report can be generated.

From the report, the tester can identify which tasks in the test generate more positive emotions and which cause the greatest degree of rejection. What is more, the tester can also see an analysis of the emotional impact of the stimulus across different segments of the sample (“What information is obtained”).
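A report like this ultimately comes down to comparing per-task emotion scores. The sketch below assumes hypothetical task names and made-up score fractions, not the lab’s actual report format:

```python
# Hypothetical per-task emotion scores (fractions of video frames
# classified as "positive" vs "negative"). Task names and numbers are
# invented for illustration only.
task_scores = {
    "Search for a product": {"positive": 0.62, "negative": 0.10},
    "Check out":            {"positive": 0.25, "negative": 0.48},
    "Edit profile":         {"positive": 0.40, "negative": 0.22},
}

# The task generating the most positive emotion, and the most rejected task.
most_positive = max(task_scores, key=lambda t: task_scores[t]["positive"])
most_rejected = max(task_scores, key=lambda t: task_scores[t]["negative"])

print(most_positive)  # Search for a product
print(most_rejected)  # Check out
```

In a real study, the designers would then focus their redesign effort on whichever task surfaces as the most rejected.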


Norman, Don. Interview by Gareth Von Kallenbach. Skewed & Reviewed, 27 Apr. 2009. Accessed 15 Aug. 2009.

Kujala, Sari, and Talya Miron-Shatz. “Emotions, Experiences, and Usability in Real-Life Mobile Phone Use.” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’13).

Wikipedia contributors. “Facial Action Coding System.” Wikipedia, The Free Encyclopedia. Wikipedia, The Free Encyclopedia, 19 Mar. 2019. Web. 16 Apr. 2019.

“What information is obtained.” User Experience, Emotion Research Lab, 2018. Accessed 16 Apr. 2019.

“Facial Action Coding System (FACS) – A Visual Guidebook.” iMotions, 18 Feb. 2019.

Design Critique: Beauty Plus App (iOS)

Introduction: Beauty Plus is a camera application produced by Meitu, Inc. It allows people to adjust their photos in a fast and easy way. Users can change the tone, contrast, and filter of a photo. They can even beautify photos by covering acne and dark circles, changing their body shape to look taller or slimmer, or modifying facial features such as enlarging the eyes or slimming the face.

                                      Figure 1                                                                 Figure 2

First of all, the discoverability of the application is strong and obvious. There is a big camera button at the bottom of the home page (Figure 1), which shows me that this is a camera application and that I can take photos with it. After I press the camera icon, the shooting window shows up (Figure 2). The big circle under the viewing frame is visible, and its function is demonstrated clearly beneath it. The circle is also a good signifier: it clearly indicates that I can press or touch it to take photos. What is more, the small house icon on the upper-left side is also a good signifier, showing that I can return to the home page by pressing it. The icon next to the house icon provides a good constraint: I can only adjust the frame size of the photo through this button. When I press it, the viewing frame changes accordingly, which gives me good feedback. Last but not least, I can press the different words under the circle to change the mode at the bottom of the screen, which provides good mapping: when I press the left word, the mode changes to video recording, and when I press the right word, it changes to photo shooting.

                                   Figure 3                                                                 Figure 4



Figure 5                                                                Figure 6

Overall, the design of this App is good and user-friendly. However, there are several bad designs in the application. First, there are no descriptive labels under the icons on either side of the photo-shooting button (Figure 2), so I cannot figure out their functions intuitively. In fact, the icon on the left lets me add stickers or cartoon images to photos, and the icon on the right lets me change filters. This is a bad signifier, because it offers no clear guide to what operation will happen after I press the icon. To improve it, the designer just needs to put the function names under the icons, for example, “Stickers” under the left one and “Filters” under the right one (Figure 3). Second, the film icon is a bad design (Figure 4). In our conceptual model, a film icon generally means that pressing it will record a video. However, in this application, pressing the film icon takes a photo and adds a film filter to it instead of making a film. What is more, the studio function is a minor one; it only offers six filters. To improve the function and connect the action better with its result, the designer can move the studio function under the shoot function as a secondary selection, which would let me add filters that make photos look like frames of a film. Third, the icon in the upper-right corner is a bad signifier. It looks as if pressing it will make the application update or sync data (Figure 5), but in fact its function is to switch between the front and back cameras. To improve it, the designer can embed the original icon into a camera icon to make the function more recognizable (Figure 6).

Generally speaking, this App is not the same as a conventional camera. It contains technology and algorithms that help users adjust their photos in an easy and convenient way. For example, a user wants to post a photo on her social media App (Goal). She takes a photo and comes up with the idea “I want to make my photo look better” (Plan). “My eyes can be bigger, and I will brighten the tone of my face” (Specify). Then she opens the App, finds the function, and adjusts the size of her eyes and the tone of her face (Perform). She sees the result and is not satisfied with it (Feedback from the screen). She makes her background color darker, which makes her figure stand out better (Compare with the previous result). Then she posts the photo happily. These are the stages that Don Norman describes in his book, and I think they clearly capture a user’s experience when he or she interacts with the world.