In the service of user experience, evaluation is many things. It can measure identifiable qualities of a system, such as efficiency or ease of use. It can solicit emotional responses directly from the user. It checks users’ models of the system against the designers’. And, as we are learning, evaluation does all of this in the service of producing documentable results that inform the design process. It is a tool for the rigorous, scientific documentation of user needs and inputs. Over time, the tool has evolved alongside our understanding of technologies, changing to reflect our ideas of what technologies should be. We see this in the structure of the class: we started with inspection methods, which can only hope to approximate user concerns, and are moving on to user testing, which gets at the qualitative issues of design through established social science research methods.
Even though evaluation has grown to encompass broader concerns, it still ultimately views the user as a subject. It falls to the researcher to analyze the findings: interviews are taped and reviewed; diary entries are coded and sorted. The meaningfulness of the results is determined by the report delivered. In the pursuit of rigor, the user has in no small part been left out of the equation. The problem with this, as Marcel LaFlamme (2007, p. 58) observes, is that users “are complex and unpredictable and have very real opinions about the conclusions that researchers draw about their lives.”
This is especially true in the context of libraries and museums, where user bases are heterogeneous, particularly across city-wide urban library systems. Different users will have vastly different needs. Traditional methods of evaluation try to mitigate this through careful sample selection. However, these differing needs are not simply tied to demographic data; they are reflections of the different social and cultural environments that engender them. No sample will ever truly capture these intricacies (Williment, 2009). Moreover, users may not be able to fully communicate their needs to researchers.

A more inclusive method of evaluation and design is co-design. In co-design, the user(s), researcher(s), and designer(s) work collaboratively throughout the project. While the methods vary (see UX Magazine, Johnny Holland, and MakeTools for examples), they all involve redefining traditional evaluation roles. The user in co-design is invited to become a specialist in their own localized knowledge, participating in the design process using tools developed by the researcher and designer to reflect the scope and properties of the project. The designer uses their expertise and professional abilities to build on the ideas generated by the user, while the researcher facilitates the discussion and explores the underlying whys and hows exposed during the activities (Sanders & Stappers, 2008). In turn, the user helps determine the criteria by which the project should be judged.
Through direct engagement with users, researchers and designers gain a more complete understanding of who the project is for and how those people interact with technology. At the same time, the user gains a stake in the project, and the final product reflects their experience.
References:
Hagan, P., & Rowland, N. (2011, November 18). Enabling codesign. Retrieved from http://johnnyholland.org/2011/11/enabling-codesign/
LaFlamme, M. A. (2007). Towards a progressive discourse on community needs assessment: Perspectives from collaborative ethnography and action research. Progressive Librarian, 29, 55-62.
Sanders, E. B.-N., & Stappers, P. J. (2008). Co-creation and the new landscapes of design. CoDesign, 4(1), 5-18.
Williment, K. W. (2009). It takes a community to create a library. Partnership: The Canadian Journal of Library and Information Practice and Research, 4(1).