Usability testing is what separates design thinking (designing for the user) from simply obsessing over features.
In this piece, we’ll outline the 6 steps to running an insightful usability test:
- Define Goals
- Choose the Test
- Create User Tasks
- Write a Research Plan
- Conduct the Test
- Draft Up a Quick Report
Let’s get started.
1. Define Goals
The first step to any successful usability test is defining your goals. This could be broad, such as:
Which checkout methods are most intuitive to our users?
Or specific, such as:
Which form design works best for increasing e-commerce purchases?
Naturally, you’ll have a lot of questions about your product, and this curiosity is good. However, remember to limit each test to only the most relevant issue at the moment. Each test should have a central focus for the most accurate results — the more objectives you test at once, the more room for error.
As David Sherwin mentions in his article on usability testing, the answers to these questions will be your test’s hypotheses.
You can generate hypotheses simply by setting aside time for you and your team to try answering the goal questions on your own.
2. Choose the Right Test
It’s not about knowing which tests work and which don’t; it’s about knowing which will work for a specific need.
In the free Guide to Usability Testing, we divide the tests into four categories based on Christian Rohrer’s fantastic article:
- Scripted — These tests analyze the user’s interaction with the product based on set instructions, targeting more specific goals and individual elements. (tree testing, hallway usability tests, benchmark testing)
- Decontextualized — Ideal for preliminary user testing and persona research, these tests don’t necessarily involve the product, but analyze more generalized and theoretical topics, targeting idea generation and broad opinions. (user interviews, surveys, card sorting)
- Natural (or near-natural) — By analyzing the user in their own environment, these tests examine how users behave and pinpoint their feelings with accuracy, at the cost of control. (field and diary studies, A/B testing, first click testing, beta testing)
- Hybrid — These experimental tests forego traditional methods to take an unparalleled look at the user’s mentality. (participatory design, quick exposure memory testing, adjective cards)
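If a quick reference helps while planning, here is a minimal Python sketch of these categories. The example methods simply mirror the list above, and the `suggest_category` heuristic is purely illustrative rather than part of Rohrer’s framework.

```python
# Illustrative only: categories and example methods mirror the list above;
# the selection heuristic is a simplification, not Rohrer's decision framework.

TEST_CATEGORIES = {
    "scripted": ["tree testing", "hallway usability test", "benchmark testing"],
    "decontextualized": ["user interview", "survey", "card sorting"],
    "natural": ["field study", "diary study", "A/B testing",
                "first click testing", "beta testing"],
    "hybrid": ["participatory design", "quick exposure memory testing",
               "adjective cards"],
}

def suggest_category(needs_product: bool, goal_is_specific: bool) -> str:
    """Rough heuristic: broad questions that don't need the product suit
    decontextualized methods; specific, element-level goals suit scripted
    tests; otherwise observe users in their own context (natural)."""
    if not needs_product:
        return "decontextualized"
    return "scripted" if goal_is_specific else "natural"

category = suggest_category(needs_product=True, goal_is_specific=True)
print(category, "->", TEST_CATEGORIES[category])
```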
Once you determine the type of usability test(s) to run, send out a descriptive announcement to give your team a heads-up. It’s even more helpful if you summarize your tactics in a quick planning document.
3. Create Your User Tasks
Everything you present to your users during the test — both the content of the question/task, as well as the phrasing — impacts how they respond.
Usability tasks are either open or closed, and your tests should incorporate a healthy mixture of both:
- Closed — A closed task offers little room for interpretation — the user is given a question with clearly defined success or failure (“Find a venue that can seat up to 12 people.”). These produce quantitative and accurate results.
- Open — By contrast, an open task can be completed in several ways. These are “sandbox” style tasks (“Your friends are talking about Optimal Workshop, but you’ve never used it before. Find out how it works.”). They produce qualitative and sometimes unexpected results.
Source: David Sherwin, “A Five-Step Process for Conducting User Research,” Smashing Magazine.
Read Tingting Zhao’s piece for more advice on optimizing tasks.
As for the wording, be careful to avoid bias. Just one wrong word can skew results.
For example, if you want to find the most natural ways in which users browse your online shop, a task like “It’s 10 days before Christmas and you need to search for a gift for your mother” might lead the user straight to the search function, as opposed to their normal method of casually clicking around (window shopping).
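To make this concrete, here is a minimal sketch of how a test script’s tasks could be kept together with their type and success criteria, plus a crude check for feature-naming words that might bias participants. The field names and the word list are illustrative assumptions, not part of any particular tool.

```python
# Illustrative sketch: field names and the biasing-word list are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class UsabilityTask:
    prompt: str                              # exactly what the participant reads
    kind: str                                # "closed" (defined success) or "open" (sandbox)
    success_criterion: Optional[str] = None  # only meaningful for closed tasks

tasks = [
    UsabilityTask(
        prompt="Find a venue that can seat up to 12 people.",
        kind="closed",
        success_criterion="Reaches a venue page with capacity of 12 or more.",
    ),
    UsabilityTask(
        prompt="Your friends are talking about Optimal Workshop, but you've "
               "never used it before. Find out how it works.",
        kind="open",
    ),
]

# Crude wording check: naming a feature (e.g. "search") in the prompt can steer
# participants toward that path instead of their natural behavior.
BIASING_WORDS = ("search", "click", "menu", "filter")
for task in tasks:
    flagged = [w for w in BIASING_WORDS if w in task.prompt.lower()]
    if flagged:
        print(f"Review wording of {task.prompt!r}: mentions {flagged}")
```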
4. Write a Research Plan Document
Modified from Tomer Sharon’s One-Pager (fantastically helpful yet lightweight), the research plan document we use at UXPin is a formalized announcement containing all the necessary details of the test.
Hand your team a slim document, around one page, to encourage them to actually read it.
While keeping things brief, you’ll want to cover at least these 7 sections:
- Background — In a single paragraph, describe the reasons and events leading to the research.
- Goals — In a sentence or two (or bullets), summarize what the study hopes to accomplish. Phrase the goals objectively and concisely. Instead of “Test how users like our new checkout process,” write “Test how the new checkout process affects conversions for first-time users.”
- Questions — List around 5-7 questions you’d like the study to answer.
- Tactics — Where, when, and how the test will be conducted. Explain why you’ve chosen this particular test.
- Participants — Describe the type of user you are studying, including their behavioral characteristics. You could even attach personas (or link to them) for more information.
- Timeline — The dates when recruitment starts, when the tests are expected to take place, and when the results will be ready.
- Test Script — If your script is ready, include it here.
Check out Sharon’s sample One-Pager to see how it should look.
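If your team keeps research artifacts alongside code, the seven sections can also live in a small template. This is only a sketch; the class, field names, and sample content below are illustrative assumptions, not Sharon’s actual One-Pager format.

```python
# Illustrative sketch: fields mirror the seven sections above; everything else
# (names, sample content) is assumed for the example.
from dataclasses import dataclass, fields
from typing import List

@dataclass
class ResearchPlan:
    background: str
    goals: str
    questions: List[str]
    tactics: str
    participants: str
    timeline: str
    test_script: str = "TBD"

    def to_one_pager(self) -> str:
        sections = []
        for f in fields(self):
            value = getattr(self, f.name)
            if isinstance(value, list):
                value = "\n".join(f"- {q}" for q in value)
            sections.append(f"{f.name.replace('_', ' ').title()}:\n{value}")
        return "\n\n".join(sections)

plan = ResearchPlan(
    background="Support tickets about the new checkout form are rising.",
    goals="Test how the new checkout process affects conversions for first-time users.",
    questions=["Where do first-time users hesitate in the form?",
               "Which step causes the most drop-off?"],
    tactics="Moderated, scripted test on the staging site; chosen because the "
            "goal targets specific checkout elements.",
    participants="First-time buyers who shop online at least once a month.",
    timeline="Recruit week 1, test weeks 2-3, share results end of week 4.",
)
print(plan.to_one_pager())
```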
Encourage your team members to give suggestions or advice so that the test results are helpful to everyone, and find out which questions they want answered as well.