Anti-Social Media: Designing for Accountability

 

One of the key issues facing social media today is the prevalence of antisocial behavior. It degrades the user experience, and in many cases it is the site’s own design that unintentionally encourages it.

What exactly do we mean by antisocial behavior? Three design failures exemplify the issue:

  1. Dissuading users from connecting to people different from them
  2. Shielding users from disparate viewpoints
  3. Failing to prevent toxic in-groups and the spread of harassment

The problem is that most social media platforms currently operate using these strategies:

  1. They suggest friends/connections based on who you know.
  2. They suggest content based on the content you already consume.
  3. They ban users individually based on reported transgressions, using either an algorithm or a group of moderators.

This has resulted in the following unforeseen outcomes.

  1. The creation of entrenched “in-groups” and “out-groups,” which are often pitted against one another.
  2. The forming and strengthening of social media “bubbles,” which keep users from encountering viewpoints that challenge their beliefs.
  3. The reinforcement of an us vs. them mentality towards authority figures, especially by those who have been censured by moderators.

How can we remedy this?

Clearly, the current methods to curb antisocial behavior are either failing or producing unwanted side effects. This is because the social media communities being created lack accountability.

The appearance of accountability is there: users who break community rules are banned or suspended, but this does not force them to change the way they act towards the out-group. Instead, the out-group (and the community as a whole) should be able to see and rate users based on the content they produce. This can be done via three design changes (a rough sketch of how they might fit together follows the list):

  1. Create tracking metrics for users that represent how helpful their comments/content are to the community at large.
  2. Make these metrics visible to the community, so that every action the user takes is visible to both the in-group and out-group.
  3. Use these metrics to detect when in-groups and out-groups are forming.
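To make this concrete, here is a minimal sketch of how such metrics might be computed. It is written in Python with entirely hypothetical data structures; the double weighting of out-group ratings and the polarization threshold are assumptions chosen for illustration, not features of any existing platform.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical record of one community rating on a piece of content.
@dataclass
class Rating:
    author: str        # user who posted the content
    rater: str         # user who rated it
    author_group: str  # cluster the author belongs to
    rater_group: str   # cluster the rater belongs to
    helpful: bool      # did the rater mark the content as helpful?

def helpfulness_scores(ratings):
    """Public, per-user metric: share of ratings that marked the user's
    content helpful, counting out-group raters double so the score rewards
    content that travels across group lines (the 2x weight is an assumption)."""
    earned = defaultdict(float)
    total = defaultdict(float)
    for r in ratings:
        w = 2.0 if r.rater_group != r.author_group else 1.0
        total[r.author] += w
        if r.helpful:
            earned[r.author] += w
    return {user: earned[user] / total[user] for user in total}

def polarized_pairs(ratings, gap=0.5):
    """Flag pairs of groups whose members rate each other's content far less
    helpful than their own group's: a rough in-group/out-group signal."""
    votes = defaultdict(list)
    for r in ratings:
        votes[(r.rater_group, r.author_group)].append(1.0 if r.helpful else 0.0)
    avg = {pair: sum(v) / len(v) for pair, v in votes.items()}
    flags = []
    for (g1, g2), cross in avg.items():
        own = avg.get((g1, g1))
        if g1 != g2 and own is not None and own - cross > gap:
            flags.append((g1, g2))
    return flags
```

The intent of the out-group weighting is to give users a visible incentive to produce content that the wider community, not just their own in-group, finds helpful.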

Even with these changes, there are ways the site designer can take a more proactive approach to combat the in-group/out-group mentality. For example, the site can suggest connections to people who are “one demographic off,” so to speak. This means recommending those with the same job but of a different generation, or perhaps those who share your political views but live in a different country. These signals largely exist already; they are how social media platforms suggest similar people to you right now.
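As a rough illustration of that heuristic, the sketch below (Python, with made-up attribute names standing in for whatever signals a platform already tracks) selects candidates who match a user on every tracked attribute except exactly one.

```python
def one_demographic_off(user, candidates, attrs=("job", "generation", "country", "politics")):
    """Suggest people who match `user` on every tracked attribute except
    exactly one, e.g. same job but a different generation. Users are plain
    dicts; the attribute names are placeholders, not a real platform's schema."""
    suggestions = []
    for other in candidates:
        differing = [a for a in attrs if other.get(a) != user.get(a)]
        if len(differing) == 1:
            suggestions.append((other, differing[0]))
    return suggestions

# Example: same job, country, and politics, but a different generation -> suggested.
me = {"job": "librarian", "generation": "millennial", "country": "US", "politics": "moderate"}
others = [
    {"job": "librarian", "generation": "boomer", "country": "US", "politics": "moderate"},
    {"job": "engineer",  "generation": "boomer", "country": "US", "politics": "moderate"},
]
print(one_demographic_off(me, others))  # only the first candidate qualifies
```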

The site can also promote content that, while created by a different in-group, has been shown to be popular with the community as a whole. For example, perhaps a post made and shared heavily in the tech community could be promoted to those working in other fields, if non-tech users react positively to it.
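A sketch of that promotion rule might look like the following; the reaction format, the minimum sample size, and the approval threshold are all assumptions made for the example.

```python
def cross_group_promotions(posts, min_outside_reactions=20, approval=0.7):
    """Return posts worth surfacing outside their origin group.
    Each post is a hypothetical dict like:
      {"id": "p1", "origin_group": "tech",
       "reactions": [("design", True), ("finance", True), ("tech", False)]}
    where each reaction is (reactor's group, liked?)."""
    promoted = []
    for post in posts:
        outside = [liked for group, liked in post["reactions"]
                   if group != post["origin_group"]]
        if len(outside) >= min_outside_reactions and sum(outside) / len(outside) >= approval:
            promoted.append(post["id"])
    return promoted
```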

For some social media platforms, it’s largely about the bottom line, and these changes seem costly and time-consuming to enact. What does the company gain? For one thing, the public relations boost these changes would create could be substantial. That goodwill would attract a wider user group, especially from minority and marginalized groups, since the likelihood of toxic communities and harassment would be lower.

Ultimately, these changes can result in a more diverse, pleasant, and expansive social media community. And isn’t that what social media is all about?

 

Image credit: http://sloanreview.mit.edu/content/uploads/2013/03/cfos-anti-social-tendencies-may-be-changing-1000.jpg

Design Critique: Goodreads (iOS App)

Photo 1: My Books Screen

This design critique is for the Goodreads iPhone app. Goodreads is a site that helps users keep track of the books they have read and those they want to read. This critique focuses on four aspects of the app: the scanning feature, the search function, the “Want to Read” button, and duplicate editions.

Photo 2: Scanning Screen

The scanning feature uses the iPhone’s camera to scan a book’s ISBN barcode. This allows the user to skip searching and immediately find both the book and the correct edition in one action. As soon as the barcode is read, the corresponding book pops up on the screen, giving the user instant feedback. If the barcode is only partially scanned, for example, the user can quickly recognize that the wrong book has been displayed. Additionally, a message appears if the scanner doesn’t have enough light to operate (Photo 2). This feedback clarifies what the problem is rather than leaving the user frustrated.

Photo 3: Main Search Screen

Photo 4: “Search My Books” Screen

The Goodreads app has built-in affordances for two different search features. The main search is accessed via the center button in the footer of the app and is clearly signified by a magnifying glass icon. This search bar checks the entire Goodreads database for matches (Photo 3). However, the search bar at the top of the “My Books” screen is not the same (Photo 1). It is just as prominent and appears in the same place, but it only searches the books already on the user’s shelves. When selected (Photo 4), both search bars bring up the keyboard and are differentiated only by the words “Search My Books” as opposed to “Search” (Photo 3). This can lead to users searching only their own books when they meant to search the entire catalog, and vice versa. One way to mitigate this confusion would be to change the catalog search bar to say “Search Goodreads” or “Search the Catalog,” so that it is clearer to users which action they are taking. Another solution is to move the “Search My Books” bar to the bottom of the page, so that users associate it with a different place in the app.

Photo 5: “Want to Read” Button

The home page of the app is where users can see what their friends are reading (Photo 5). This feature is available so that users can easily add books they find interesting to their shelves. To facilitate this process, the “Want to Read” button is made as discoverable as possible. It is bright green on a mostly tan page, and has clear signifiers to tell users that it should be clicked. If a book is already on the user’s shelf, the button will be depressed, and the user’s rating will appear. This design is based at first on knowledge in the head, because users have to learn to recognize a green button as a way to add a new book to their shelves. Despite this, the design is intuitive enough (using green as a signal to take an action) that users will soon shift to using knowledge in the world.

One major issue with Goodreads is the ease with which users can add a book to their shelves multiple times. While the app shows users whether a book is on their shelf when they come across it, this only extends to that specific edition of the book. If a user finds another edition, they may add it to their shelves without realizing the error. These errors are frustrating because they are mistakes rather than slips. If it were simply a case of hitting the wrong button, the user would notice and quickly correct it. Instead, users have to rely on the “find duplicates” feature, which helps them tidy up their shelves only after the mistakes have been made. This could be rectified by changing the UI to notify users when they are about to add a different edition of a book already on their shelves. A confirmation window would be a relatively simple way to stop users before they commit to the action.
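As a rough sketch of what that check could look like, the Python below assumes each edition can be mapped to a shared “work” identifier that groups all editions of a book (Goodreads clearly has such a grouping, since it powers the “find duplicates” feature, but the function and data structures here are purely illustrative).

```python
def add_to_shelf(shelf, new_edition, edition_to_work, confirm):
    """Warn before adding a different edition of an already-shelved work.

    shelf            -- set of edition IDs already on the user's shelf
    new_edition      -- edition ID the user is about to add
    edition_to_work  -- hypothetical mapping from edition ID to a shared
                        "work" ID that groups all editions of one book
    confirm          -- callback that shows a confirmation dialog and
                        returns True if the user wants to add it anyway
    """
    new_work = edition_to_work[new_edition]
    already_shelved = {e for e in shelf if edition_to_work[e] == new_work}
    if already_shelved and not confirm(new_edition, already_shelved):
        return False  # user backed out; no duplicate added
    shelf.add(new_edition)
    return True
```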

The Goodreads app is one of the smoothest and least frustrating apps to use on iOS. It mimics the feel of the website while providing features that take advantage of the iPhone’s full potential.

Design Story: The Intrepid Museum

About the Project

We were tasked with completely redesigning the Intrepid Museum’s website. After some research, we found the current site severely lacking: it is neither aesthetically pleasing nor good at giving users easy access to information.

Current Intrepid Home Page

My Role

One of the main roles I took on in the group was that of communicator and editor. My groupmates all learned English as a second language; they were often familiar with technical or specialized UX concepts (in most cases more so than I was), but occasionally had trouble expressing them in English. In these cases, my writing background was well suited to helping them communicate those ideas to the group as a whole and to refining the wording of our user tests and tasks.

In terms of concrete deliverables, I wrote the first set of cards for our card sort, developed the six tasks we used in our tree test, and created the tasks we used for our first and second rounds of digital prototype testing. I also created the task flow that was delivered as part of the final prototype.

Initial Design

Our initial design was heavily based on the results of our card sort. The card sort results pointed towards seven top-level navigation tabs: Events & Programs, Exhibits, Join & Give, About, Shop, Video Gallery, and Plan Your Visit. Several of these tabs had second-level navigation options, based on both the card sort and our own sense of which content was related. We did deviate from the card sort in one respect: the “Events” and “Programs” groups were deemed too similar to split, so we placed their content under one heading so that users wouldn’t have to choose between them immediately.

Since these were the most commonly used headings, we assumed that our users agreed that this set of options would form an understandable and intuitive information architecture (IA).

Decision 1: The Education Tab

Our tree test involved six different tasks designed to test our information architecture. Task three quickly revealed a large problem with our hierarchy. We asked users where they would find information about scheduling a tour for a school group (see Picture 1). The correct tab was Tours (found under the Plan Your Visit heading), but the majority (eight of twelve participants) chose Group Visits (also a sub-level of Plan Your Visit). A ninth user chose a different incorrect path. After this, we decided to make our first major change to the IA: creating a top-level “Education” tab to try to prevent this confusion in the future. Under this tab would be three sub-levels, as seen in Picture 2.

 

Picture 1: Tree Test, Task 3

 

Picture 2: Top-level Education Tab.

This top-level Education tab remained in our IA until we conducted our first round of user testing on the digital prototype. Task one was to find an appropriate event for an elementary school class to attend. The ideal task flow was to select Education, then Educational Programs, and then a single event. Our users instead went to the Events & Programs tab. As a result, we decided to divide Education up: Tours moved back under the Visit tab (renamed from Plan Your Visit), and the rest moved to a second-level Educational Programs tab, with sub-levels that can be seen in Picture 3. We believe most of the confusion arose because Education is too general a term for users to intuit what content sits under it.

Picture 3: Revised Prototype

Our second round of prototype testing was much more successful, and appeared to confirm the placement of the new Educational Programs tab. Ultimately, we learned that there is a significant difference between users grouping content under a heading and users predicting what content will be found under that heading. Though users agreed that certain events could be grouped under the word Education, they could not reliably interpret what they would find under it when it was presented to them.

Decision 2: The Calendar

Our initial plan for the calendar was to put it on the Events & Programs page. This was based on the assumption that the calendar would primarily be used to discover which days had events and programs occurring (See Picture 4).

Picture 4: Events & Programs with Calendar

This idea was supported by our card sort, in which the calendar was closely linked to events and programs. While we kept the Calendar under that same tab, we eventually moved it to its own page. This made it easier to find, as it now had a designated second-level heading that users could see in the hierarchy. As for the design of the calendar itself, we felt that a rotating style would serve better on mobile. Because the Intrepid’s user base skews slightly older, a small grid calendar could prove frustrating when users try to select specific days (see Pictures 5 and 6).

Picture 5: Desktop Calendar

 

Picture 6: Mobile Calendar

Ultimately, while we were tempted simply to use the most popular calendar format, we were able to innovate and come up with an alternative that sets our design apart without compromising usability.

Decision 3: Drop-Down Menus

Our first design for the desktop prototype used hover-activated drop-down menus that displayed the content available under each of our top-level tabs. We believed this would let users see the options for each tab without having to navigate to any new pages. During our first round of prototype testing, however, several users complained that the drop-downs actually forced them to choose between the second-level options before they could see any of the content behind them.

For example, one of the tasks in the digital prototype testing was for the user to find parking information. This information was located under the second-level Plan Your Visit tab, under the top-level tab Visit. Users, however, were unable to see this without selecting Plan Your Visit, and so some instead selected the About Intrepid Museum second-level navigation, which is located under the top-level tab About (See Picture 7).

Picture 7: Site Map

To correct this issue, we removed the drop-down menus from the desktop version, instead creating single pages for each top-level tab. This way, when a user selected Visit, they would be able to see all the third-level options, but could still easily navigate to a different top-level tab (see Picture 8).

Picture 8: Visit Page

We thought we were saving users time by providing them a look at the second-level tabs via the drop-down menus. Instead, we were forcing them to make decisions without being able to see the content, and a wrong choice would actually result in more time and more clicks than if we’d simply sent them to a second-level page. It was as if we’d removed the top-level tabs (and their intuitive meanings) and forced users to click through three times as many options to find the information they needed.

Decision 4: Hamburger Menu

Our group was split on how the top-level navigation should be presented on our mobile prototype. My half of the group wanted to use the “hamburger” menu, and the other half wanted to find an alternative solution.

The arguments in favor of the hamburger menu were that it was recognizable and would allow us to save a significant amount of space on the screen. If we were to provide our full menu, it would take up nearly the entire home page on a mobile phone (See Picture 9).

Picture 9: Mobile Prototype Menu

The counterarguments were that the hamburger menu is not intuitive and forces an extra action from the user to see the actual menu options. Ultimately, I was able to convince my groupmates that there were simply no viable alternatives to the hamburger menu that would let us keep the home page visible. We used the hamburger menu in our user testing and encountered no significant issues or complaints.

We learned that though certain common design options may seem clunky or outdated, it is not as easy to replace them as it may first appear. Being recognizable is the most crucial facet of a successful symbol, and since you are not your user, it’s not easy to create something intuitive to replace it.

Decision 5: Admission Prices

Two of the main pieces of information that my users had trouble finding when I conducted my initial website evaluations were the museum’s hours and its ticket prices. They considered these crucial pieces of information, and were frustrated when they were hidden in a footer (in the case of the American Museum of Natural History) or only available once they had committed to buying tickets. I therefore went into the design process with the belief that this information should be easily accessible.

My group’s initial idea was simply to include this information under the banner on the home page. The hours were easy to fit in, but after seeing how complex the ticket prices could be, we realized that they would take up too much room. I then had the idea to create a “Plan Your Visit” button, which would serve both as a call to action and as an easy way to jump right to the pertinent information. Clicking the button would bring users straight to the Plan Your Visit page, where the ticket prices were immediately visible. During our user testing, however, users did not follow this logic, and instead used the top-level navigation to find the information. We decided that the wording should more accurately reflect the landing page, rather than the section of the site users would be on. After briefly trying “Get Tickets,” we settled on “Admission.” This let users know exactly what they would see when clicking it, and once they were there, they could find whatever else they needed to plan their trip (See Picture 10).

Picture 10: Desktop Home Page

We learned that while shortcuts can help direct users to highly requested sections of the site, they shouldn’t be treated the same way as top-level navigation. The wording of a shortcut should be as specific as possible; otherwise, the user still has to sift through the rest of the navigation anyway.

The Final Product

Our final design incorporated aspects of every decision we made, both those detailed above and countless smaller ones made consciously or unconsciously. Some of these decisions came from our past experiences, but in many cases we had to override those assumptions when our data and testing challenged our way of thinking. The lessons I learned were not just applicable to these specific situations, but to the design as a whole.

While the prototype is certainly not perfect, some of our decisions appear to be validated. Our final round of user testing saw no significant issues. Our top-level navigation options were exactly the same as one of the other groups’ results, which would suggest that our decisions are also supported by their research and user testing.

Kevin Cosenza

Information Architecture and Interaction Design: 643-02