SHU Design Story by Armon Burton of Curious Solomon

About the Project

Overview: This project involved redesigning the information architecture and improving the user experience of the Seton Hall University Libraries website. The Seton Hall University Libraries support excellence in academic and individual work, enable inquiry, and foster intellectual and ethical integrity and respect for diverse points of view through user-focused services and robust collections, serving as the intellectual and cultural heart of the University.

My Role: I was a part of a small team of four, responsible for conducting user research, restructuring the information architecture, conducting competitive reviews, and designing low- and high-fidelity prototypes.

Discipline: User Experience, User Research, Information Architecture, Wireframing and Prototyping

Design Tools: Sketch, InVision, Optimal Workshop, Lucidchart

Research Methods: Observations, Questionnaires, Interviews, User Testing

Duration: Spring 2018, approx. 15 weeks

Brief Outline of the Problem

The Seton Hall Library website is an important resource for graduate and undergraduate students. It houses hundreds of thousands of print materials and electronic resources for a diverse student body. Students can communicate with a librarian through the live chat feature for immediate assistance. They can also search an array of databases and collections for books and articles. The library website is undoubtedly a vital resource for all segments of the student body. However, the site suffers from poor discoverability due to the inconsistency and disorganization of its information architecture and visual design.

Our Approach to Solving the Problem

Understanding the Users

We started the redesign process with user research, with the goal of gaining a deeper understanding of Seton Hall Library users. We divided our target audience into three groups and used interviews, questionnaires, and observations to learn more about them.

Methodology

Interviews – We conducted in-person, semi-structured interviews with 4 graduate students from Pratt Institute. Each participant was asked questions about their background, experience at school, research practices, and interactions with libraries.

Questionnaires – The online questionnaire focused on users' interests, hobbies, and technology/website preferences, as well as basic information about their age and position. It consisted of 13 multiple-choice questions and 1 fill-in-the-blank question and was emailed to a total of 14 users, male and female, between the ages of 18 and 40.

Observation – The observation focused on how a user interacts with a website through a task-based approach. The one-on-one, in-person observations included a total of 2 users and took about 45 minutes each.

Key Findings

We learned the following insights about our user groups after evaluating the data collected from the observations, questionnaires, and interviews.

  • Navigation is a map – One of the first things you may do when you visit a new library is look at the floor map to find where the stacks for your favorite genres are located. Similarly, on a website, a user expects to easily find what information is available to them and where to find it. Making the site navigation organized and clear improves what is discoverable for the user.
  • On-the-go accessibility – Users are on the go more than ever and no longer want to be tied to a home computer for internet access. The collected data reflect a strong user preference for mobile devices and laptops over desktops, and accessibility on these platforms is extremely important to users.
  • Organization leads to better consumption – Physical libraries are filled with information and opportunities to learn, but that information is organized in a way that makes it accessible and easy to consume. The same principle should apply to a library's digital counterpart, and keeping it in mind can make for a fun and rewarding user experience.
Personas

We developed the following personas based on the information collected during our user research. They include each persona's behaviors, interests, pain points, and needs.

Information Architecture

The next step of the redesign process was to gain insight into our users' mental models by organizing and structuring complex sets of information. To create a structure for the new site navigation, we used the card sorting and tree testing methods to uncover how users logically organize information and to assess discoverability within a hierarchical structure.

Both of these methods were invaluable, as they not only tested a proposed site navigation structure but also helped us understand how our potential audience thinks and organizes information. We used the Optimal Workshop user research platform to run the card sort and tree tests with a total of 30 users.

Card Sorting

In the card sorting test, 8 users were given 52 cards to group and label as they saw fit. The 52 cards were a representative collection of items of information found on the Seton Hall University Library website and in its navigation.

Team Curious Solomon using Post-it notes to create the site map

 

Site map created in Lucidchart

Once the users completed the card sort, we analyzed the results using Optimal Workshop's similarity matrix tool (image below), which shows the commonalities in how the users grouped the cards.
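
For a sense of what the similarity matrix is computing, here is a minimal sketch of the underlying idea. This is our own illustration with made-up card names and groupings, not Optimal Workshop's implementation: it counts how often each pair of cards lands in the same group across participants.

```python
from itertools import combinations

# Hypothetical card-sort data (a tiny subset of our 52 cards): for each
# participant, the groups they made, as group label -> cards in that group.
sorts = [
    {"Find Materials": ["Databases", "E-Journals"],
     "Get Help": ["Ask a Librarian", "Live Chat"]},
    {"Research": ["Databases", "Ask a Librarian"],
     "Support": ["E-Journals", "Live Chat"]},
]

cards = ["Ask a Librarian", "Databases", "E-Journals", "Live Chat"]

# Count how often each pair of cards landed in the same group.
together = {frozenset(p): 0 for p in combinations(cards, 2)}
for sort in sorts:
    for group in sort.values():
        for pair in combinations(group, 2):
            together[frozenset(pair)] += 1

# Similarity = share of participants who grouped the pair together.
for pair, n in sorted(together.items(), key=lambda kv: -kv[1]):
    a, b = sorted(pair)
    print(f"{a} / {b}: {n / len(sorts):.0%}")
```

Pairs that score high across many participants are strong candidates to sit under the same navigation heading, which is exactly how we read the matrix when drafting the site map.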

Tree Testing

In the tree testing activity, we evaluated the site navigation structure created from the card sort by having 22 users find the locations in the structure they felt would best complete specific tasks.

A participant piloting the initial tree test we created based on our card sorting activity

We ran two rounds of testing: the first to evaluate the initial draft of our navigation, and a follow-up round to test the modifications we made based on the results of the first. Each round had 11 participants. This is important to note because the initial site navigation structure was based on our analysis of the card sorting results, while the second version, used during the follow-up round, incorporated changes based on the direct and indirect successes and failures observed in the initial round. In the second round of testing, our success rate jumped from 47% to 73%, while our directness dipped from 70% to 65%.
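
For readers unfamiliar with these two metrics, the sketch below shows how they are commonly computed; the task data here is invented for illustration, not our actual Optimal Workshop results. An attempt counts as a success when the participant's final selection is a correct destination and, under one common definition, as direct when the participant never backtracks along the way.

```python
# Hypothetical tree-test attempts: the path of nodes a participant visited
# and the set of correct destinations for the task.
attempts = [
    {"path": ["Home", "Services", "Ask a Librarian"],
     "correct": {"Ask a Librarian"}},
    {"path": ["Home", "About", "Home", "Services", "Ask a Librarian"],
     "correct": {"Ask a Librarian"}},   # succeeded, but backtracked
    {"path": ["Home", "About", "Staff Directory"],
     "correct": {"Ask a Librarian"}},   # direct, but failed
]

def is_success(attempt):
    # Success: the final node selected is one of the correct answers.
    return attempt["path"][-1] in attempt["correct"]

def is_direct(attempt):
    # Direct: no node visited twice, i.e. the participant never backtracked.
    return len(attempt["path"]) == len(set(attempt["path"]))

success_rate = sum(map(is_success, attempts)) / len(attempts)
direct_rate = sum(map(is_direct, attempts)) / len(attempts)
print(f"success: {success_rate:.0%}, directness: {direct_rate:.0%}")
```

This is also why success and directness can move in opposite directions, as ours did: participants can reach the right answer more often while wandering a bit more to get there.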

Results from the first tree test

Results from the second tree test


Key Findings

We learned the following insights about our user groups after evaluating the data collected from the card sorting and tree tests.

  • Write clear and concise tasks for tree testing – We found some of the questions during the initial round of tree testing to be confusing and unclear, which may have made tasks harder to complete or led to direct/indirect failures. Therefore, in the follow-up round, we not only modified the site navigation structure but also reworded tasks or wrote new ones to fit the new structure. Clear and concise tasks are extremely important to avoid failures that reflect bad directions rather than bad navigation.
  • Provide context for the users – While running both the card sort and the tree tests, we noticed it was helpful to give users additional information about the purpose and context of the test. Users were informed before starting, via email and a welcome message, that the cards and navigation related to information found on a library website and its navigation menu. This gave them more clarity, understanding, and perspective as they worked through the sorting activity or completed each tree testing task.
  • Use simple and relevant terminology – As users worked through the card sort and tree tests, we realized they were having a hard time understanding the language we used to label our cards and site navigation structure. This made it difficult for them to complete the given tasks, such as merging cards into groups or finding information within the site navigation. Therefore, we renamed the cards from library terminology to more widely understood terms, which helped users a great deal as they completed the card sorting and tree testing tasks.

Competitive Review

To make the website effective and visually attractive, it was important for us to assess the sites of its current and potential competitors. The review also helped us see how they solve similar needs for their users. We selected 5 websites for our competitive review and compared them across 5 dimensions.

Prototype Evaluation

After working on the information architecture and analyzing the competitor sites, we moved to the design stage. We started this process by creating three tasks for the user to work through. We then discussed how to design a low-fidelity digital prototype of the desktop version of the website. We first sketched the interfaces out, then used Sketch to create them and InVision to make them interactive. Finally, we each tested the prototype with users who fit our user groups and profiles to get feedback.

Low Fidelity Prototype

During my user testing, I had two users work through a set of tasks on two low-fidelity wireframes of the Seton Hall Library site, one desktop and one mobile. The wireframes were created as black-and-white illustrations and focused on the big picture. While the UI elements were shown as boxes, circles, lines, and other shapes with minimal text, they gave the users a chance to see and test the basic structure of the potential layout of the website.

User Tasks

At the very beginning of this project, we had a meeting with client stakeholders. They acknowledged that there were flaws in the current design and identified areas of need and focus, such as giving users a better way to contact a librarian via chat or provide feedback, and helping them find critical information quickly. We focused our tasks around these two issues: one for the mobile prototype and one for the desktop prototype.

  • Mobile task – You are a teacher and you have a brilliant idea for how the library could help you in your course. You would like to suggest it to the librarian.
  • Desktop task – You found a number of books, but you are not sure how many you can borrow or for how long.



Artboard screenshots from Sketch of the desktop wireframe. These artboards were later animated and tested with InVision.

 

Artboard screenshots from Sketch of the mobile wireframe. These artboards were later animated and tested with InVision.

User Feedback from Low Fidelity Evaluation

  1. Use descriptive and meaningful labels to encourage user interaction
  2. Be mindful of how we group similar features together
  3. Provide pathways for users to navigate through the site via buttons and breadcrumbs
High Fidelity Prototype

The final stage of the project was to design a high-fidelity prototype of our redesign of the Seton Hall University Libraries website on both mobile and desktop platforms, again using Sketch and InVision. We also decided to use tasks similar to the ones used during the low-fidelity prototype evaluation.

Artboard screenshots from Sketch. These artboards were later animated and tested with InVision.

To keep our two prototypes, mobile and desktop, in sync, we created a style guide for the user interface based on the SHU style guide. The end product was a consistent design, layout, and workflow with a streamlined and understandable information architecture.

Links to the high-fidelity prototypes: Desktop | Mobile

Conclusion

In conclusion, I learned a great deal over the course of this project. Above all, I learned that we should design products with the user in mind. With user-centered design, you focus on the users and their wants and needs in every design phase, and by doing so you create a usable and sustainable product that satisfies both the user and the designer. I especially liked the way the assignments and lectures were structured over the course of the semester, with each one building on the last and giving purpose and meaning to the next activity, assignment, or discussion. Building this design story in particular helped me see the full picture; after going through the process, it was easy to make connections between all the assignments, briefs, tests, and activities.

As mentioned at the beginning of this design story, I didn't work alone. I worked alongside three great aspiring UX professionals. Working in a team can be challenging (if you let it) but can also be a rewarding experience. Like the users we interviewed and observed, each team member brings their own perspective, style, and way of doing things. There were plenty of times during the testing and design phases when a team member pointed out something I hadn't thought of myself, such as using a particular dimension in our competitive review (e.g., accessibility) or giving me tips on their design techniques and aesthetic. Through all the late days and nights and frustrating moments, I'm glad we reached the "Performing" and "Adjourning" stages of Bruce Tuckman's model of group development. I think we were able to combine our unique points of view, skills, and experience to build a great prototype the client seemed to appreciate.

Speaking of the client, presenting our final high-fidelity prototype was a bit nerve-racking, as we didn't know what to expect. Nonetheless, we made it through the presentation, and the client and professor provided great feedback and asked relevant questions, even pointing out ideas of ours they liked and suggesting things we could expand on. I also appreciated the candid format of the presentation, i.e., no deck or slides, just a demo and Q&A. It is representative of what may happen in the "real world" and underscores that you should be prepared and confident in your design. Because if you aren't, why should you expect the client to be?

Ethics and Emotion in Controlled Experiments

Users' interests are constantly changing, and their attention spans are as short as they have ever been. Given this, companies are always thinking of new ways to maintain engagement with their users. Sometimes that comes in the form of changes to their products and how users interact with them. It's understood that every potential change, whether major or minor, should be tested, not only to ensure it works from a technical perspective but also to ensure it achieves the desired goal or effect. One effective method many companies use is A/B testing.

According to Optipedia, A/B testing is an experiment where two or more variants of a user interface are shown to users at random, and statistical analysis is used to determine which variation performs better for a given conversion goal. [Optipedia]

Figure 1 – With A/B testing there are two (or more) variants. Variant A (the control) and variant B (the variation) are nearly identical user interfaces with the exception of a single change. That change can be as small as relabeling a checkout button from "shop now" to "buy now" or as large as altering how a feature works.
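
To make the mechanics concrete, here is a minimal sketch of the two halves of an A/B test: deterministically assigning users to variants, and comparing conversion rates with a two-proportion z-test. The conversion numbers are made up for illustration, and nothing here reflects any particular company's testing infrastructure.

```python
import hashlib
from math import erf, sqrt

def assign_variant(user_id: str) -> str:
    # Deterministic split: hashing the user ID means the same user sees
    # the same variant on every visit, rather than flipping a coin each time.
    digest = hashlib.sha256(user_id.encode()).digest()
    return "A" if digest[0] % 2 == 0 else "B"

print(assign_variant("user-42"))  # same result on every call for this ID

# Made-up results: conversions / visitors for each variant.
conv_a, n_a = 120, 1000   # control, e.g. the "shop now" button
conv_b, n_b = 150, 1000   # variation, e.g. the "buy now" button

p_a, p_b = conv_a / n_a, conv_b / n_b
pooled = (conv_a + conv_b) / (n_a + n_b)
se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se

# Two-sided p-value from the normal CDF; small values suggest the
# difference in conversion rates is unlikely to be chance.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
print(f"lift: {p_b - p_a:.1%}, z = {z:.2f}, p = {p_value:.3f}")
```

Notice that nothing in this machinery asks whether the user agreed to take part, which is precisely the ethical gap discussed below.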

It's easy to see how effective A/B testing can be, with its controlled tests and comparative analytical data; however, the way companies generally practice it raises some ethical questions. I was honestly surprised to find out how often the companies I use regularly, such as Google, Amazon, and Instagram, perform A/B tests on their products. The tests run all the time, and most of the time users are unaware of them. User agreements aside, this ongoing testing is essentially done without users' explicit agreement to participate, and the lack of understanding of the companies' goals and objectives, and of the potential impact the testing could have on users, is troubling.

For instance, in 2014 it came to light that Facebook had conducted an emotional response test that included close to a million of its users without their knowledge. During this emotional contagion experiment, the social platform gave some of the test users a more positive newsfeed while others received a more negative one. The study, published in collaboration with researchers from Cornell University, found that users with negative newsfeeds posted more negative words, and those with positive newsfeeds posted more positive words. While the results are not surprising, the duplicitous way the experiment was performed and the possible emotional impact it could have on users is alarming and unethical. You can find details on 10 other user experiments conducted by Facebook at Forbes.

In 2012, in response to growing toxic behavior in its community, Riot Games tested a new chat feature in its immensely popular multiplayer online battle arena (MOBA) game, League of Legends. The objective wasn't to eliminate toxic behavior from the game, as that would be nearly impossible, but to protect players from the harm of disruptive behavior by some members. In many online multiplayer games, you can chat (by text or voice) as you play, either with only the people on your team (party chat) or with everyone in the game (cross-team chat). This test, run without the players' knowledge, consisted of a variant with cross-team chat disabled by default. The feature was still available in the game, but players had to turn it on manually via the menu settings. After analyzing the results, Riot found a 30 percent swing from negative messages to positive ones, even while overall chat activity remained the same. More details on this test can be found in an archived video on the Game Developers Conference website.

Even though the goals and hypotheses behind these experiments were arguably met, both suffer from their own ethical issues around confidentiality, transparency, and user consent. While not all A/B testing is conducted this way, it's hard to argue that a test that intentionally subjects users to a frustrating, stressful, and emotionally draining experience is good practice or ultimately helpful.

Some might argue that testing both "good" and "bad" experiences is essential to fully understand a user's overall experience, and that usability research methods such as A/B testing track along the same ethical lines as a normal product launch. But I'd assert that companies should be more transparent and upfront with their users about their testing, and that intentional exposure to a negative and potentially harmful experience does little to advance the product or the industry; instead, it halts progress to the detriment of the users.

References

Optipedia, "A/B Testing." Available online: https://www.optimizely.com/optimization-glossary/ab-testing/ [Accessed March 18, 2019]

American Marketing Association (AMA), "Are A/B Tests Ethical?" Available online: https://www.ama.org/marketing-news/are-a-b-tests-ethical/ [Accessed March 18, 2019]

Forbes, "10 Other Facebook Experiments On Users, Rated On A Highly-Scientific WTF Scale." Available online: https://www.forbes.com/sites/kashmirhill/2014/07/10/facebook-experiments-on-users/#13f0685a1c3d [Accessed March 15, 2019]

Game Developers Conference (GDC), "The Science Behind Shaping Player Behavior in Online Games." Available online: https://gdcvault.com/play/1017940/The-Science-Behind-Shaping-Player [Accessed March 15, 2019]

Design Critique – MTA On the Go Kiosks

Introduction

The Metropolitan Transportation Authority's (MTA) interactive On the Go kiosks were first introduced in 2015, and over the years they have provided New Yorkers and tourists with a one-stop station for navigating their way in, out of, and around the city. You can conveniently get directions from your current station to the Empire State Building, real-time information on the arrival of the next train, or service status updates that may affect your travel.

Figure 1 – The MTA On the Go kiosk I used for this design critique. It is located in the 14th Street station (A/C/E and L trains) near the Pratt Manhattan campus.

Design Critique

To begin, the On the Go kiosks provide good discoverability cues about the information a user can find on the device, both through their physical placement, typically in the center of the platform, and through their physical nature. Each kiosk is a solid metal structure with a large illuminated screen that affords interaction from passersby. An interesting advertisement may catch a potential user's eye as they walk by or approach the kiosk, but as they scan the display they notice three large blue labeled icons in a black box. The placement, layout, and format of these buttons resemble the apps on a smartphone, so there is a perceived affordance of interaction by pushing (tapping the screen). Through this examination of the kiosk, the user bridges the Gulfs of Execution and Evaluation and understands how to operate its digital interface.

Figure 2 below illustrates that these icons are always visible, even when an advertisement is running. The designers did this purposely so that users would not mistake the kiosk for just another of the many digital advertising billboards in New York's subway.

Figure 2 – Example of an ad by the MTA promoting safety and caution when moving between subway cars. As you can see, the three buttons for "Maps & Directions", "Arrivals", and "Service Status" are displayed even while an advertisement is running.

Furthermore, these three large blue icons or buttons, labeled (in English) "Maps & Directions", "Arrivals", and "Service Status", act as signifiers, telling the user what information can be retrieved by interacting with the screen. The image on each button is unique and distinctive: a map pin with an "i", a clock, and a diamond caution sign with an "!". It's no coincidence that these images resemble buttons and icons used in other applications, such as Google's patented map pin icon or the modern diamond traffic caution sign. The use of images users are already familiar with helps them bridge the Gulfs of Execution and Evaluation using their existing mental models.

In his book The Design of Everyday Things, Don Norman notes that "Designers need to ensure that controls and displays for different purposes are significantly different from one another." The same principle has been applied here: the use of these always-on-display, distinctly labeled buttons supports visibility and a clear mapping between the intended actions and the actual operations or information they retrieve.

Q: Wondering how long it will take for the next C train or bus to reach your current station?
A: Press the "Arrivals" button to see how many minutes away it is, as seen in Figure 3.

Q: Want to take a detour to see the 9/11 Memorial?
A: You can plot your route by pressing the "Maps & Directions" button, as seen in Figure 4.

Figure 3 – After pressing the "Arrivals" button, the next screen shows the approximate arrival times of the next trains at your current station. You also have the option of finding bus arrival times near your station by pressing the bus button/icon.

As far as I could tell, the kiosk doesn't emit sound, despite having what appear to be speaker grilles at the top and bottom of the display. However, it does provide immediate, perceptible feedback by quickly transitioning (in under a second) to a different screen after any button press. For example, in Figure 4A, after pressing the "Maps & Directions" button, the screen transitions to a zoomed-out map of your location/station with a directional pad for navigation. In addition to the signifying directional pad, there are clear instructions ("Tap Any Station") to further inform the user that the map is interactive.


Figure 4 – In picture A, at the bottom of the "Maps & Directions" screen, you will find a "Points of Interest" option. This button brings up a curated list of various NYC landmarks such as Times Square, the 9/11 Memorial, and Yankee Stadium. In picture B, you can select one of these landmarks to bring up information about it, with an option to get directions, as seen in picture C. Notice that each of these windows has a circled X, which allows the user to go back to the previous screen.

As seen in Figure 4, you will also find a new signifier, a circled X button, which allows a user to exit the current screen and return to the previous one. The convention, a type of constraint, of using an X button to close or exit is common practice in a variety of applications we use today, and it is applied consistently throughout the interface.

The repeated use of the same button icons and functions (map pins for directions, clocks for schedules, circled X's to close or exit), together with their dominant blue color, drives consistency throughout the interface and both affords and signifies what is interactive and what is not.

Recommendation

One potential improvement is to let users change the language used throughout the application's labels, directions, and navigation. While the icons themselves resemble icons used in other applications (e.g., Google's patented map pin icon), they may initially be misunderstood and misused, since the same image can mean different things in different cultures (a cultural mapping/constraint). When discussing culture and design in DOET, Don Norman notes, "What is natural depends upon the point of view, the choice of metaphor, and therefore, the culture."

A perfect example of culture affecting natural mappings, a cultural constraint, is the action mapping of the PlayStation controller buttons. In Japanese culture, X means No/Cancel and O means Yes/Confirm, and the buttons reflect that; in Western cultures this mapping is reversed. Having the option to change the language from the default English to something else could improve the discoverability and understanding of the interface.

Summary

Overall, the MTA's On the Go kiosks are well thought out, and their use of affordances, signifiers, immediate feedback, and mappings makes them a strong example of user-centered design. They are easy to use and understand, with a simple design that enables the discovery of a variety of information to help people navigate the busy streets of New York City. All of these things together provide a great conceptual model of how the kiosk works and what information is retrievable.