There are many differences to take into consideration when choosing whether to set up your test as a qualitative or a quantitative study. Qualitative testing yields results that can answer questions like “why is this happening?” or “how did that interaction make you feel?”. The main goal of a quantitative usability test, on the other hand, is to gather metrics, whether that is the success rate of a certain task or the time taken to complete it, to name only a couple.
So, thinking broadly, the main difference between the two is which question you are answering: “why?” or “how much?”.
This is especially true in the case of the quantitative usability test known as benchmarking. It follows the same methodology as a qualitative usability test; the focus, however, is placed on the metrics that can be gathered rather than on insights into a user’s thinking. Users are asked to perform realistic tasks while the testers record the time it takes to complete those tasks, along with their success and failure rates.
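To make those metrics concrete, here is a minimal sketch in Python of how success rate and time on task might be tallied from one round of sessions. The record layout and values are invented for illustration; real tools will structure this differently.

```python
# Hypothetical results from one benchmarking round:
# each record is (participant_id, task_id, completed, seconds_taken).
sessions = [
    ("p01", "checkout", True, 74.2),
    ("p02", "checkout", False, 120.0),
    ("p03", "checkout", True, 61.5),
    ("p04", "checkout", True, 88.9),
]

success_rate = sum(1 for _, _, ok, _ in sessions if ok) / len(sessions)

# Time on task is usually reported for successful attempts only.
completed_times = [secs for _, _, ok, secs in sessions if ok]
mean_time_on_task = sum(completed_times) / len(completed_times)

print(f"Success rate: {success_rate:.0%}")            # 75%
print(f"Mean time on task: {mean_time_on_task:.1f}s") # 74.9s
```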
Mainly, benchmarking is used to chart a website or app’s progress over time! This can be the progress of a prototype across iterations, of different versions of an application, or of different sites running in parallel with one another.
The benefits of benchmarking lie not only in the information gained, but in the way that information can be shared. Numbers and figures are concrete and carry more weight than the insights gained from qualitative testing alone. Charts and direct comparisons can also be more effective when trying to persuade higher-ups in any one direction. The metrics are readable at a glance, so the need to sit through user test videos is lessened, if only a bit.
“How is it done?”, I hear you ask. Well, it starts with a plan. This is vital: there should be no deviations from the plan once you have started down the benchmarking path, so full agreement on what you and your team are trying to achieve is paramount. Some of the main questions you have to ask are: “Do we have the budget to keep this up?”, “What are we measuring?”, and “Is the insight gained worth it?”. Then you create the test with a script that you keep consistent. Goal-oriented tasks that avoid over-explanation are used, much as they are in qualitative tests. The difference is the use of evaluative questions after each task: numerical values can be assigned to how “hard” or “easy” a task was. This rhythm of task and question needs to be maintained through every future test.
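As a rough sketch of what “keeping the script consistent” can look like in practice, the tasks and the post-task rating question can be stored as plain data, so every round reuses them verbatim. Everything here (the product name, the wording, the seven-point scale) is a hypothetical example, not a prescribed format.

```python
# A hypothetical benchmark script kept as data, so every test round
# reuses identical task wording and the same evaluative question.
BENCHMARK_SCRIPT = {
    "product": "example-shop.com",
    "post_task_question": "How easy or difficult was this task? "
                          "(1 = very difficult, 7 = very easy)",
    "tasks": [
        {"id": "find-item", "prompt": "Find a pair of running shoes in your size."},
        {"id": "checkout", "prompt": "Buy the shoes using a saved payment method."},
        {"id": "track-order", "prompt": "Find the delivery status of your order."},
    ],
}

def run_session(script: dict) -> None:
    """Walk a moderator through the task-then-question rhythm."""
    for task in script["tasks"]:
        print(f"Task {task['id']}: {task['prompt']}")
        print(script["post_task_question"])
        # ...record completion, time taken, and the 1-7 rating here...

run_session(BENCHMARK_SCRIPT)
```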
Demographics should be chosen next. Whatever criteria you use for your first test should remain constant through all future tests. The testers don’t need to be the same people again and again, but the demographic should be the same (e.g. college students under 25). The sample size of a quantitative usability test is much larger than that of a qualitative one, because a larger sample is required for the results to carry any statistical weight. How frequently you run new tests depends on how much the product you are testing changes; if it is a stable build, tests can be run less often.
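To see why the larger sample matters, here is a back-of-the-envelope Python sketch of the margin of error on an observed task success rate at different sample sizes, using the normal approximation for a 95% confidence interval. This illustrates the trend only; it is not a substitute for a proper power analysis, and the approximation is shaky at very small n.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Half-width of a 95% CI for a proportion (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# The same observed 75% success rate is far less trustworthy with
# 5 participants than with 50: the interval shrinks as n grows.
for n in (5, 20, 50, 100):
    print(f"n={n:3d}: 75% ± {margin_of_error(0.75, n):.0%}")
```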
There are glaring differences in price, time, and the size of user groups. Usability testing for quantitative data skews all of the above toward the higher end of the spectrum: it is usually the more time-intensive and expensive option. This, of course, depends on the specific tests you run and whether you conduct them in person or remotely. Generally speaking, though, quantitative research will be more expensive because of the number of participants needed to make the sample size viable.
The end result, however, can be valuable in showing how your UX has improved over time and how it compares to competitors, and it may be more persuasive when presenting your findings to stakeholders in the company.