Our Streets, Our Stories: A Case Study

The Concept


Our Streets, Our Stories is an ongoing oral history project run by the Brooklyn Public Library. The program aims to collect the personal stories of community residents throughout Brooklyn neighborhoods through recorded interviews.

As an Our Streets, Our Stories volunteer responsible for conducting interviews, I initially set out to design a web-based solution for making the audio recording process easier for volunteers such as myself. As I started the design process, I considered the following three objectives:

  1. Create an easier way to record and upload audio content and associated digital media, such as family photographs, from volunteers’ own mobile devices (a rough sketch of this idea follows the list).
  2. Improve site visitors’ access to the program’s rich trove of digital content by designing a new website with enhanced searching, browsing and discoverability.
  3. Create a platform other libraries and organizations can use for oral history projects in their own communities.
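
To make the first objective concrete, here is a minimal sketch of how in-browser recording and upload might work, using the standard MediaRecorder and fetch APIs. The /api/interviews endpoint and the file names are hypothetical placeholders for illustration, not part of the actual project.

    // Sketch: record audio in the browser, then upload it along with
    // optional digital media such as a family photograph.
    // The "/api/interviews" endpoint is a hypothetical placeholder.
    async function recordInterview(durationMs: number): Promise<Blob> {
      // Ask the volunteer's device for microphone access.
      const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
      const recorder = new MediaRecorder(stream);
      const chunks: Blob[] = [];
      recorder.ondataavailable = (e) => chunks.push(e.data);

      return new Promise<Blob>((resolve) => {
        recorder.onstop = () => {
          stream.getTracks().forEach((t) => t.stop()); // release the microphone
          resolve(new Blob(chunks, { type: recorder.mimeType }));
        };
        recorder.start();
        setTimeout(() => recorder.stop(), durationMs);
      });
    }

    async function uploadInterview(audio: Blob, photo?: File): Promise<void> {
      const form = new FormData();
      form.append("audio", audio, "interview.webm");
      if (photo) form.append("photo", photo); // e.g. a scanned family photograph
      await fetch("/api/interviews", { method: "POST", body: form });
    }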

Understanding Users


Three personas: volunteer, site visitor, and librarian/administrator.

My first task was to understand the site’s likely users. I determined three probable categories of users: 1) volunteers responsible for recording interviews; 2) site visitors interested in accessing the program’s digital content; and 3) librarians and administrators.

I conducted user research in the form of an interview and a questionnaire and drew from my own experience as a volunteer to create personas.

Key insights at this stage:

  • The three user groups have entirely different goals in visiting the site.
  • Users’ age range varies widely, from 20-something to retirement age and beyond.
  • Given the above, and to make the project feasible, I decided at this point to focus design efforts on the volunteer experience only.

Refining the Concept


Next deliverables: Diagramming the volunteer experience with an experience map and a flow diagram.

Next, I created a series of deliverables to map out the volunteer experience in using the site. My experience map diagrammed the volunteer journey at a high level; a flow diagram defined a specific set of tasks and accompanying decision points I thought users would follow; and a storyboard told the volunteer’s story in the form of hand-drawn comic panels. At this stage, I also started sketching concepts for the site’s screens across desktop and mobile in the form of paper prototypes.

Key insights at this stage:

  • In my initial sketches, I leaned towards creating a structured, stepped experience to guide volunteers through the interview process. The screens resembled a checkout wizard, with explicitly numbered and sequenced steps.
  • My early user testers felt the number of steps and screens in my paper prototype overstated the actual complexity of the tasks at hand. I realized I needed to simplify, both the tasks themselves and the terminology I was experimenting with.

Further Refining the Concept


My final high-fidelity digital prototypes: mobile version on the left, desktop version on the right.

In this stage, I created a first set of digital wireframes; went back to the drawing board to sketch additional paper prototypes; and further refined and finalized my wireframes to create a high-fidelity digital prototype. I also created a visual design scheme for the site by designing a site logo, creating page layouts, and choosing fonts and colors for text and navigation elements.

Key insights at this stage:

  • As I moved from paper sketches to digital, I decided my initial “checkout wizard” approach was overly structured. After some iterating, I eventually switched to a “content studio” metaphor to give volunteers freedom to go out of sequence while conducting interviews (while still providing structure to the experience as well as a recommended sequence).
  • I realized I had to design both a public-facing “front” to the site and a password-protected “back” side. I named the password-protected area the Recording Studio and created a logo and a distinct design scheme for it.
  • I considered how layouts and navigation should differ across the site’s desktop and mobile versions. For example, the desktop version has a top navigation bar with buttons, while in the mobile version, the navigation is condensed into a hamburger menu (a rough sketch of this breakpoint logic follows this list). I used lightbox effects in the desktop version to focus attention on the smaller dialog windows, but this wasn’t necessary in the mobile version.
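
To illustrate that last point, a single media-query check could drive both navigation treatments. This is a hedged sketch using the standard matchMedia API; the 768px breakpoint and the element IDs are hypothetical, not taken from the actual prototype.

    // Sketch: show a top navigation bar on desktop and a hamburger
    // menu on mobile. The breakpoint and element IDs are hypothetical.
    const mobileQuery = window.matchMedia("(max-width: 768px)");

    function applyNavLayout(isMobile: boolean): void {
      document.getElementById("top-nav")!.hidden = isMobile;
      document.getElementById("hamburger-menu")!.hidden = !isMobile;
    }

    applyNavLayout(mobileQuery.matches);
    mobileQuery.addEventListener("change", (e) => applyNavLayout(e.matches));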

Final high-fidelity prototypes:

Desktop version:

https://www.justinmind.com/usernote/tests/20175004/20178596/20178598/index.html

Mobile version:

https://www.justinmind.com/usernote/tests/20175004/20178596/20179150/index.html

 

On Customer Experience and User Experience: A Q&A with Chris Brown

I spoke with Chris Brown, who manages research for the Customer Experience team at Wiley, the company I work for. Our conversation covered Chris’s professional experience and role at Wiley. We talked about the meaning of the term “customer experience” and how it differs from “user experience.” The terms are confusing to many of us: for example, at Wiley, UX research methods and deliverables such as personas and journey mapping fall under the CX team more so than the UX team.

My program at Pratt is about user experience. What is customer experience?

Well, UX folks are concerned with an element of a customer’s experience [and not the whole]. They’ll look at an interface, what’s working and not working, what the user is trying to accomplish. They’ll identify use cases and gather requirements and things of that nature that will make their way into a product roadmap. They’ll test software and do all of that kind of thing… The word “usability” gets at the difference. How is the navigation? Does it work? Is it efficient?

Customer experience is a broader thing. We’re looking at customer groups vs. individual user experiences. And it’s looking at the customer’s journey along a path from discovery of a particular product or offering, looking at the customer’s needs and aspirations, and the pain points, attitudes, and behaviors in the context of a program or something broader—one of the things we try to do is to narrow it down… So if you’re a customer you’ve got a particular set of needs, aspirations, and attitudes about what’s available: discovery, evaluations, and purchasing.

And this is from a marketing perspective. In terms of a program, we try to understand what the experience is like. We do research at the front end to understand customers. That body of knowledge is applied in different ways, in terms of developing new programs and revising existing programs in an ongoing way.

We look at the customer’s journey at different touchpoints. For example, beginning-of-term surveys go out at the beginning of the semester. End-of-term surveys are at the end of the semester. Broadly, this is about the voice of the customer.


How are personas used at Wiley?

It’s one of the tools we use. Foundationally, it’s about insight. It’s about understanding your customers so that the actions you take are customer focused. You need to really dive in deep and develop your personas.

The value is in the process and the actual doing. To develop personas you do it collaboratively with the teams here who are developing the products and programs. Wiley’s education group is doing it with instructors and students, and other groups are doing it with librarians, journal authors, and elsewhere.

A persona is a deep description of a type of person. It includes elements of their journey, along with other elements.


Personas are created for students, instructors, and journal authors. 

How are journey maps used at Wiley? What are they? Are they actual deliverables?  

Journey maps are used to make sure you’re asking the right questions. The value is in the application of the journey map. You can call the thing a deliverable, but that doesn’t mean it’s actually done. They’re representations, emblematic of customer types and discrete customer needs, and so you’d apply them when you’re thinking about new programs and services you’ll develop for your market. For product designers, they help make the customers come alive; they’re a useful tool.


Marketers can use them as well, to focus a marketing communication plan. They help sales and marketing teams communicate with certain segments of customers.

You have to be careful with personas and journey maps. People are complex. They’re not the same thing all the time. Good personas should be recognizable and should be easy to differentiate from one another. That’s not always the case when you read some of them.


Sample journey map, including “stages,” “thinking,” and “feeling” for an instructor persona.

Can you tell me a little about your career path?

I started working in market research for a supplier, doing a bunch of different things, more consumer oriented. Then I started working for a publisher in research and management roles. A lot of it had to do with product development and marketing, some with academic research. My previous job was heading research for the Pearson Foundation. We looked at various things like innovative assessment, language learning… the intersection of technology and learning, AI stuff related to how languages can be taught in an online environment. This research informed product development at Pearson. How research gets translated into product development is a real challenge.

 

A Discussion with the Usability Research Team at Wiley

I met with Vikki and Akiko, usability researchers at Wiley Publishing, to talk about usability and its role at Wiley. Wiley is an educational content and software development company (although the word “publishing” is still part of the company name, Wiley is really much more technology-driven than this word implies).

As usability researchers, Vikki and Akiko manage activities ranging from more upstream research like ideation, concept validation, and discovery, to turning a critical eye on existing products and services to help business units make cases for where resources should be spent. They’ve conducted usability tests on diverse products like Wiley’s online bookstore, a reading platform for academic articles, an educational video platform (Wiley’s version of Lynda.com), and a homework system for college students.

Vikki and Akiko work for the “Technology, Design, and Innovation” group, a small team that exists outside of Wiley’s three main divisions: the college division (where I work), an academic journals division, and a professional/trade publishing division. I diagrammed this organizational structure (see below) because understanding the usability team’s position as “inside outsiders” helps explain both the positive aspects and the limitations of their role.

[Diagram: Wiley’s organizational structure, with the Technology, Design, and Innovation group outside the three main divisions]

 

  • Positive aspects: Because the researchers don’t report to the product managers who ask them to test products, it’s likely easier for them to generate unbiased reports. Sitting outside those reporting structures, they’re never in the position of telling the boss that his or her idea is a bad one.
  • Potential limitations: They are engaged on a project basis, only when resources are available, which means some important products inevitably go untested, since it’s up to product teams to decide what does and doesn’t get tested. After testing is completed and they deliver their write-ups, there’s no guarantee that product managers or designers will follow through on any of the recommendations. To combat this, they said they always invite product teams to sit in on testing sessions—nothing is more convincing than hearing bad or good news straight from users.

I asked if they had any general insights or advice on their role, and here are some highlights from our discussion:

  • On the difference between market research and usability research, they said, “Market research brings the horse to water, but UX research figures out the way to make the horse drink it.” Wiley is a company that historically sees research as a marketing opportunity—it’s about getting close to customers, not to users. The usability team is trying to change this perception, but it’s tricky when the company is UX-immature, as Wiley is.
  • On how much usability testing is done, they said, “Usability is what we sometimes hope we don’t have to do (or do less of), because it means we didn’t do the work (research) up front.” They talked about the importance of (and the challenges around) getting involved in product development upstream rather than at the end of the process.
  • On usability reports, they said the detailed reports, often in PPT format with a lot of visuals, are a way to make the whole room share a perspective and avoid endless debating or always deferring to the “HiPPO,” the highest-paid person’s opinion.

Finally, I asked if they could point to a specific example of a successful usability intervention that resulted in big improvements to an interface. We looked at the Wiley Online Library reading platform, which can be accessed here: http://onlinelibrary.wiley.com/enhanced/doi/10.1111/nph.12453/#. They explained some of the challenges of designing an online HTML reading experience that aimed to match the ease and familiarity of reading printouts or PDFs, and also how “PDF grabbing” as a behavior pattern among academic researchers was not going away any time soon. In this case, usability testing led to specific innovations in the reading platform’s design, such as a newly added “fly out tray” for references that follows users up and down the page in an unobtrusive way, avoiding tedious scrolling through multiple screens to reach references at the ends of articles.
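
For readers curious how such a tray might behave, here is a rough sketch of the general pattern (emphatically not Wiley’s actual implementation): a fixed-position panel that stays in view as the reader scrolls and slides open on demand. All names and style values below are illustrative.

    // Sketch of a "fly out tray" pattern: a fixed panel that follows
    // the reader up and down the page and toggles open and closed.
    function createReferencesTray(referencesHtml: string): void {
      const tray = document.createElement("aside");
      tray.innerHTML = referencesHtml;
      Object.assign(tray.style, {
        position: "fixed",              // stays in view while scrolling
        right: "0",
        top: "20%",
        maxHeight: "60%",
        overflowY: "auto",              // long reference lists scroll in place
        transform: "translateX(100%)",  // start closed, off-screen
        transition: "transform 0.2s ease",
      });
      const toggle = document.createElement("button");
      toggle.textContent = "References";
      toggle.onclick = () => {
        const isOpen = tray.style.transform === "translateX(0%)";
        tray.style.transform = isOpen ? "translateX(100%)" : "translateX(0%)";
      };
      document.body.append(tray, toggle);
    }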

Epson LabelWorks Printer (Bad Design)

The Epson LabelWorks label printer is a device for typing and printing label stickers like the kind you’d stick on file cabinets or mail boxes in apartment buildings.

[Photo: the Epson LabelWorks label printer]

The device’s design fails mainly in that it presents a system image that is both difficult to understand and far more complicated than the average user likely expects. I bought the label printer because I needed to print labels for mailboxes. I’ve used it probably four times, to type and print the following label stickers:

  • GREG’S MAIL
  • IVAN’S MAIL
  • BUZZER IS BROKEN CALL 917733XXXX
  • FED EX PLEASE LEAVE PACKAGES AT CLEANERS ON CORNER

As a user, I’d expect the label printer to support three activities: 1) typing my message, 2) printing my message onto a label sticker, and 3) getting the label sticker out of the device upon completion of printing. The second and third activities are fairly clear: a prominent, color-coded print button allows for printing, and a similarly prominent cut button allows for cutting the label sticker and detaching the new label from the device.

The design, however, works overtime to make the first activity, typing a message, difficult. While the device has a typical keyboard, it also includes about 16 additional buttons marked with unintelligible (at least to me) symbols and abbreviations denoting unclear operations that appear to stray far from the task at hand. Some examples of unclear/idiosyncratic language or symbols found throughout the interface:

[Photo: buttons marked with unclear symbols and abbreviations]

Norman might call this a gulf of execution – there’s a huge disconnect between my image of what the device should do (it should make labels) and should look like (it should probably look like a keyboard) and the system image presented by the design (is it a scientific calculator or what?). I imagine most of these buttons have to do with word processing functions – making letters all caps, changing fonts, setting font sizes, etc. Word processing is not uncommon; in fact it’s an everyday activity for many of us. Surely a design can accommodate these features in more conventional ways.

I found it interesting that the back panel of the device includes “quick tips,” essentially a glossary explaining what operations the symbols denote and how to perform them.

[Photo: the “quick tips” panel on the back of the device]

If the designers felt the need to incorporate an instructional panel like this, surely this should have raised flags that the design wasn’t entirely successful. Norman even mentions in his book how good design obviates the need for instruction; the design itself should include all the clues necessary for users to understand its operation.

Doorbot (Good Design)

The doorbot is both a physical device and an accompanying software application for mobile devices. In this post, I’m focusing on the physical device component as an example of good design. We bought the doorbot for our building when our buzzer system broke. The physical device mounts to the front of the building next to the door and looks like this:

[Photo: the wall-mounted doorbot device]

The doorbot system works as follows (a rough code sketch of the first step appears after the list):

  1. A visitor arrives at the resident’s front door and pushes the doorbell button on the wall-mounted device, which sends a push notification to the resident’s mobile phone via the doorbot mobile app.
  2. The resident opens the doorbot app, engaging the video camera on the wall-mounted device as well as the speakers and microphones on both the resident’s mobile phone and the wall-mounted device.
  3. The two users then engage in a video conversation via mobile phone and wall-mounted device.
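
Here is a minimal sketch of step 1, assuming a hypothetical push-notification service; the PushService interface and the device IDs below are invented for illustration, not taken from the actual product.

    // Sketch: a button press on the wall-mounted device notifies
    // the resident's phone. The PushService interface is hypothetical.
    interface PushService {
      notify(deviceId: string, payload: { title: string; action: string }): Promise<void>;
    }

    async function onDoorbellPressed(
      push: PushService,
      residentPhoneId: string
    ): Promise<void> {
      await push.notify(residentPhoneId, {
        title: "Someone is at your door",
        action: "open-video-session", // opening the app starts the one-way video call
      });
    }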

The doorbot system is NOT a buzzer—it doesn’t allow a resident to unlock the door. And it’s not a traditional doorbell in that the ring function only takes place on the resident’s mobile phone in the form of a push notification. Also, the video function is strictly one-way—only the resident can see the visitor via the device’s camera and not the other way around.

Good design, according to Norman, might be summarized as “easy to figure out what to do” and “easy to tell what is going on,” and I think the wall-mounted device succeeds in both areas. Its interface is designed for familiarity. There are three components to the interface: a prominent button, which evokes a doorbell; a prominent video camera, which evokes a traditional video intercom system; and a prominent microphone and speaker, which both in their placement at the top and bottom of the device and in their “parallel lines” design follow well-known design conventions of telephone handsets and mobile phones.

Just looking at the device, a user should know what to do and how it works. Norman would say this sense of familiarity is a function of a design that successfully harmonizes its system image (a doorbell meets video intercom system) with the user’s prior knowledge, or mental model, of how doorbells and video intercom systems work and what they look like. It minimizes the gulf of execution by making the system image visible and easy to understand and by constraining action to the pushing of a button, since the button is the only interactive component of the interface.

At the same time, the design minimizes the gulf of evaluation by providing clear feedback to let the user know what is going on. It does this in the form of both audio and visual feedback. When the visitor presses the doorbell button, the device makes a ringing sound like a doorbell and the light ring surrounding the button flashes, indicating that the resident is being called. When the resident picks up, the light ring switches from flashing to a solid blue light, indicating the video conference has been engaged. And when the resident hangs up, the blue light turns off, indicating the conference has ended.
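
That feedback behavior maps neatly onto a small state machine. Here is a hedged sketch; the state names and the console output stand in for the device’s actual firmware, which I have no visibility into.

    // Sketch: the light ring's three feedback states as described above.
    type CallState = "ringing" | "connected" | "ended";

    function setLightRing(state: CallState): void {
      switch (state) {
        case "ringing":   // visitor pressed the button: ring flashes
          console.log("light ring: flashing");
          break;
        case "connected": // resident picked up: solid blue light
          console.log("light ring: solid blue");
          break;
        case "ended":     // resident hung up: light off
          console.log("light ring: off");
          break;
      }
    }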

Norman mentions in his book how good design obviates the need for labels or instruction manuals and I think the doorbot, with no visible labels or explanatory text at all incorporated into its design, is a great example of this.