Remote user testing has a reputation for being a quick and dirty evaluation method, but like other methods it has strengths, weaknesses, and best use cases. Remote user testing can be divided into two types: moderated and unmoderated. In unmoderated remote user testing, the evaluator and user are not in the same place, and the user evaluates the interface independently. Moderated remote user testing also separates evaluator and user physically, but the evaluator is in the "same 'virtual' space" as the user, whether via screen sharing, chat, or some other channel (Schade).
Remote user testing can be significantly cheaper than face-to-face user testing. There can still be costs associated with purchasing software or incentivizing users, but even these costs can be mitigated with free options and recruiting services.
With remote user testing, recruiting either a greater diversity or specificity of users is easier. User testing sites often offer an option to customize the demographics of the users to be sampled. The internet's connected nature also makes a more diverse pool of users reachable: it is far easier to recruit users from India over the internet than to fly to India and recruit users face-to-face.
Remote evaluation can offer both qualitative and quantitative data. With technology that captures a user's computer activity and physiological responses, a variety of quantitative data can be obtained through remote user testing. Evaluators can also collect qualitative data through their own observations in moderated testing and through self-reporting from users. Remote user testing does not yield only one type of data.
Results can be obtained quickly. As mentioned, the recruiting process is easier, and with all of the data collected digitally, analysis can also be sped up because it can be automated. Of course, qualitative data may still require a human evaluator, but a computer that already has the raw data can quickly produce initial results.
The users will be captured in their own use contexts. As user testing has evolved, the importance of use context has become more and more apparent. With remote user testing, the user can test an interface within their normal environment: on their own machine, among their other open programs, and in their own space (work, home, or wherever they would normally use the interface).
With unmoderated remote user testing, there is no support or help for the user if they get stuck. With no evaluator present, virtually or otherwise, the user may go down the wrong path, and the data collected will not be useful to the evaluator.
Most remote user tests are kept short, which means the interface to be evaluated, or the section of it, needs to be small. Evaluating a full website is most likely not feasible with remote user testing.
Security of information shared over the internet can be a concern in remote user testing. Evaluators have less control over the security of the information about the interface they are sharing with the users when they are not in the same room.
Some remote user testing methods require special equipment or software that a user will not have. For example, not all PC laptops have webcams installed, which limits PC users from participating in moderated user tests with video. This can skew or limit results.
Because remote user testing relies on technology, from an internet connection to specialized software, the possibility of technical problems increases. With face-to-face user testing, even if one technological aspect of the test fails, for instance, the webcam, the evaluator can still record observations with a pencil and paper.
As with other data-driven design decisions, quantitative data obtained from unmoderated testing does not explain why participants took a certain action, and with no evaluator present or in contact with the user, it is not possible to ask follow-up questions (Barnum 2011). However, this weakness can also be a strength: the sheer amount of quantitative data that unmoderated testing can produce may impress managers who are bent on making design decisions with data.
Remote user testing is not inherently better or worse than face-to-face user tests, and the evaluator’s discretion should still be used in selecting a method of testing. We have a habit of becoming enamored of the newest technology, and while remote user testing is the best option in many cases, it still does not completely replace or replicate face-to-face user testing.
Barnum, C. M. (2011). Usability testing essentials: Ready, set…test! Burlington, MA: Morgan Kaufmann.
Bolt, N. (2010). “Pros and Cons of Remote Usability Testing.” Johnny Holland.
Keep It Usable. “What’s the real difference? Face-to-face versus Remote user testing.”
Schade, A. (2013). “Remote Usability Tests: Moderated and Unmoderated.” Nielsen Norman Group.
We Are Personal. “Remote Vs. Local User Testing.”
Sources to Help Select a Tool for Remote User Testing
Nielsen Norman Group. (2014). “Selecting an Online Tool for Unmoderated Remote User Testing.”
Bolt, N. (2010). “Quick and Dirty Remote User Testing.” A List Apart.