Last week I was fortunate enough to attend the 16th annual IA Summit in Minneapolis, MN. According to its homepage, the IA Summit is “the world’s most prestigious gathering of information architects, user experience designers, content strategists, and all those who work to create and manage information spaces.” The conference was nothing short of amazing: it was as entertaining as it was intellectually stimulating. I came away from the experience with a richer understanding of, and a greater appreciation for, IA and the field of user experience as a whole, and I made some new friends in the process.
Although I could elaborate on nearly every session I attended, one in particular stood out for its relationship to usability testing: Jared Spool’s “Is Design Metrically Opposed?” Spool began by discussing how observations lead to inferences, which in turn lead to design decisions. He explained how a single observation (e.g., the average time spent on a page drastically increases one day) could support a variety of inferences, each with a drastically different design solution. So how do we know which inference is correct? The answer is simple: do more research.
The bottom line when it comes to metrics is that the data provided by tools like Google Analytics and Net Promoter can’t tell us why the numbers are what they are. However, Spool explained that we already have the tools to answer the “why,” such as customer journey maps; we just need to use them to our advantage. A customer journey map tells the story, from the user’s perspective, of their relationship with an organization, service, product, or brand over time. We can produce these through usability tests and user studies and then use them to home in on exactly what was frustrating and what was delightful about the user’s experience.
To illustrate, Spool provided a case study from a recent project he worked on to improve the checkout process on a major e-commerce website. After creating their customer journey map and reviewing the site’s page view data for each step of this process, Spool and his team observed a huge drop between the review shopping cart and shipping information tasks. When they showed this observation to the client, they were told not to worry about it because “75% of all e-commerce sites have abandoned shopping cart issues.” The team did continue to worry about it, and through user testing in the lab they found that users needed to log in to make a purchase and often had trouble remembering their account information. This forced users through a number of extra steps just to complete a purchase: steps that were unaccounted for on the team’s initial journey map. The team then requested the page view data for these extra steps and noticed a steady decline with each task. Next they asked for some data that was not readily available: how much money was sitting in these “abandoned shopping carts.” When they finally received it, they discovered that the total loss in revenue due to the account sign-in issues was around $300 million per year. They took this information to the client and proposed a guest checkout solution, which was approved. After the change was implemented, the site’s revenue increased by about $300 million. Coincidence? We think not.
So how did Spool and his team make this success happen? He explained that it came from combining qualitative usability research with quantitative custom metrics. In the example above, it was the custom metric of “unrealized shopping cart value,” not page views, that ultimately saved the day. If the team had stopped with their initial inferences (e.g., it was normal to lose customers during the checkout process; there were no extra steps between the review shopping cart and shipping information tasks), they would never have gotten to the bottom of the issue, and the guest checkout solution never would have presented itself.
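The idea behind a custom metric like “unrealized shopping cart value” can be sketched in a few lines of code. The funnel steps, page view counts, and average cart value below are all made up for illustration; Spool’s actual data was not published.

```python
# Illustrative sketch (hypothetical numbers, not Spool's actual data):
# given page views per checkout step and an average cart value, compute the
# drop-off between steps and the total "unrealized shopping cart value".

funnel = [
    ("review_cart", 100_000),
    ("sign_in", 62_000),
    ("shipping_info", 41_000),
    ("payment", 38_000),
]
avg_cart_value = 85.00  # hypothetical average dollar value of a cart


def unrealized_value(funnel, avg_cart_value):
    """Sum the visitors lost at each step, weighted by average cart value."""
    lost = 0
    for (step, views), (next_step, next_views) in zip(funnel, funnel[1:]):
        dropped = views - next_views
        print(f"{step} -> {next_step}: lost {dropped} visitors")
        lost += dropped
    return lost * avg_cart_value


print(f"Unrealized cart value: ${unrealized_value(funnel, avg_cart_value):,.2f}")
```

The point of the metric is that it converts an abstract funnel chart into dollars, which is the number that got the client’s attention; the page view counts alone never would have.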
Is design metrically opposed? Although Spool never directly answers the question, I would have to say no. What I took away from this session was that, as UX designers, we should customize metrics with the end goal in mind and stop jumping from observations to inferences too soon; we should experiment based on our inferences instead of stopping with the first one. Through research we can turn our inferences into observations, which allows for better design decisions, because “observations trump inferences every time.”
- Boag, P. (2015). All you need to know about customer journey mapping. Smashing Magazine. Retrieved from http://www.smashingmagazine.com/2015/01/15/all-about-customer-journey-mapping/
- Grocki, M. (2014). How to create a customer journey map. UX Mastery. Retrieved from http://uxmastery.com/how-to-create-a-customer-journey-map/
- Spool, J. (2015, April). Is design metrically opposed? Presentation at the IA Summit, Minneapolis, MN.