Usability testing is what separates design thinking (designing for the user) from simply obsessing over features.
In this piece, we’ll outline the 6 steps to running an insightful usability test:
Define Goals
Choose the Test
Create User Tasks
Write a Research Plan
Conduct the Test
Draft Up a Quick Report
Let’s get started.
1. Define Goals
The first step to any successful usability test is defining your goals. This could be broad, such as:
Which checkout methods are most intuitive to our users?
Or specific, such as:
Which form design works best for increasing e-commerce purchases?
Naturally, you’ll have a lot of questions about your product, and this curiosity is good. However, remember to limit each test to only the most relevant issue at the moment. Each test should have a central focus for the most accurate results — the more objectives you test at once, the more room for error.
2. Choose the Test
Once your goals are defined, choose the type of usability test that best fits them. Usability tests generally fall into four categories:
Scripted — These tests analyze the user’s interaction with the product based on set instructions, targeting more specific goals and individual elements. (tree testing, hallway usability tests, benchmark testing)
Decontextualized — Ideal for preliminary user testing and persona research, these tests don’t necessarily involve the product, but analyze more generalized and theoretical topics, targeting idea generation and broad opinions. (user interviews, surveys, card sorting)
Natural (or near-natural) — By analyzing the user in their own environment, these tests examine how users behave and pinpoint their feelings with accuracy, at the cost of control. (field and diary studies, A/B testing, first click testing, beta testing)
Hybrid — These experimental tests forego traditional methods to take an unparalleled look at the user’s mentality. (participatory design, quick exposure memory testing, adjective cards)
Once you determine the type of usability test(s) to run, you should send out a descriptive announcement to give your team a heads up. It’s even more helpful, in fact, if you summarize your tactics with a quick planning document.
3. Create Your User Tasks
Everything you present to your users during the test — both the content of the question/task, as well as the phrasing — impacts how they respond.
Usability tasks are either open or closed, and your tests should incorporate a healthy mixture of both:
Closed — A closed task offers little room for interpretation — the user is given a question with clearly defined success or failure (“Find a venue that can seat up to 12 people.”). These produce quantitative and accurate results.
Open — By contrast, an open question can be completed in several ways. These are “sandbox” style tasks (“Your friends are talking about Optimal Workshop, but you’ve never used it before. Find out how it works.”) These produce qualitative and sometimes unexpected results.
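Because closed tasks have clearly defined success or failure, their results are easy to aggregate into completion rates. A minimal sketch of such a tally (the task name and pass/fail results below are invented for illustration):

```python
# Hypothetical closed-task results: one boolean per participant,
# True when the participant completed the task successfully.
results = {
    "Find a venue that can seat up to 12 people": [True, True, False, True, False],
}

for task, outcomes in results.items():
    passes = sum(outcomes)
    rate = passes / len(outcomes)
    print(f"{task}: {rate:.0%} success ({passes}/{len(outcomes)})")
```

Open tasks resist this kind of scoring, which is exactly why a healthy test mixes both: closed tasks give you the numbers, open tasks give you the surprises.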
As for the wording, be careful to avoid bias. Just one wrong word can skew results.
For example, if you want to find the most natural ways in which users browse your online shop, a task like “It’s 10 days before Christmas and you need to search for a gift for your mother” might lead the user straight to the search function, as opposed to their normal method of window shopping.
4. Write a Research Plan Document
The research plan document we use at UXPin, modified from Tomer Sharon’s One-Pager (fantastically helpful yet lightweight), is a formalized announcement with all the necessary details of the test.
You want to hand your team a slim document of around one page to encourage them to actually read it.
While keeping things brief, you’ll want to cover at least these 7 sections:
Background — In a single paragraph, describe the reasons and events leading to the research.
Goals — In a sentence or two (or bullets), summarize what the study hopes to accomplish. Phrase the goals objectively and concisely. Instead of “Test how users like our new checkout process,” write “Test how the new checkout process affects conversions for first-time users.”
Questions — List around 5-7 questions you’d like the study to answer.
Tactics — Where, when, and how the test will be conducted. Explain why you’ve chosen this particular test.
Participants — Describe the type of user you are studying, including their behavioral characteristics. You could even attach personas (or link to them) for more information.
Timeline — The dates for when recruitment starts, when the tests will be expected to take place, and when the results will be ready.
Test Script — If your script is ready, include it here.
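Pulled together, the seven sections above form a one-page skeleton you can copy into your own planning document (the placeholder text below is ours, not a prescribed format):

```markdown
# Usability Research Plan: [study name]

## Background
One paragraph on the reasons and events leading to this research.

## Goals
- Test how the new checkout process affects conversions for first-time users.

## Questions
1. [5-7 questions the study should answer]

## Tactics
Where, when, and how the test will run, and why this method was chosen.

## Participants
Target user type and behavioral characteristics (link to personas).

## Timeline
Recruitment starts / tests take place / results ready.

## Test Script
Attach or link the script, if ready.
```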
5. Conduct the Test
As for your role during the actual test, you must choose between being present (moderated) and letting the user work on their own (unmoderated). You can also choose to conduct your test on-location or remotely.
Unmoderated — Unmoderated tests are cheaper, faster, and generally easier to recruit and schedule. They also remove the influence of a moderator, leading to more natural and less biased results. On the downside, there is less opportunity for follow-up questions or supporting users who go astray during tests.
Moderated — While costlier and requiring more effort to organize, moderated tests allow you to “lead” the user, for better or worse. Moderated tests are recommended for rougher prototypes (higher risk of bugs and usability issues) or incredibly complex prototypes (users might need some clarification).
While every test has different qualities and best practices, the following advice works across the board:
Make users comfortable — Remind them you are testing the product, not their capabilities. A test script helps ensure you hit upon a few reassuring points in the beginning of each test.
Don’t interfere — This avoids bias, and may reveal insights into user behavior you hadn’t predicted. The best insights usually come when a user isn’t engaging with the product the way it was designed. Pay attention to workarounds and let them inspire feature improvements.
Record the session — This makes a solid reference point for later, when interpreting the results. If you’re running the test through UXPin, you can record data like facial reactions, clicks, and all audio.
Collaborate — Tomer Sharon suggests creating a Rainbow Spreadsheet to allow everyone to record their own interpretations for quick comparisons later. We used his spreadsheet during our Yelp redesign exercise and found it was very helpful for summarizing results for designers and stakeholders.
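The rainbow spreadsheet idea is simple enough to sketch in code: each cell records whether a participant exhibited an observed behavior, and the overlap across participants shows which findings are widespread. A toy version in Python (the observation labels and participant data are invented for illustration):

```python
# Toy rainbow spreadsheet: rows are observations, columns are participants.
# A cell is True when an observer saw that participant exhibit the behavior.
observations = {
    "Missed the search box":        {"P1": True,  "P2": True,  "P3": False, "P4": True},
    "Hesitated at checkout":        {"P1": False, "P2": True,  "P3": True,  "P4": True},
    "Used browser back to recover": {"P1": False, "P2": False, "P3": True,  "P4": False},
}

# Rank observations by how many participants they affected.
ranked = sorted(observations.items(),
                key=lambda item: sum(item[1].values()), reverse=True)
for label, cells in ranked:
    print(f"{sum(cells.values())}/{len(cells)} participants: {label}")
```

Sorting by frequency is a quick proxy for the comparison step: behaviors shared by most participants rise to the top, while one-off quirks sink to the bottom.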
6. Draft Up a Quick Report
The usability report is how you share the results with the team, so that everyone’s on the same page.
To best organize and make the results readily available, we suggest creating a cloud folder with universal access.
As you write the report, keep the following tips in mind:
Avoid vagueness – Mentioning that “Users couldn’t buy the right product” isn’t very helpful since multiple factors might be involved. Perhaps the checkout process was difficult, or the product listings were hard to browse. Explain the root of each issue from an interaction design and visual design perspective (e.g. confusing layouts, a checkout process with too many steps, etc.).
Prioritize issues – Regardless of how many issues you find, people must know what’s most important. We recommend categorizing the report (e.g. Navigation Issues, Layout Issues, etc.) and then adding color tags depending on severity (Low/Medium/High). List every single issue, but don’t blow any out of proportion. For example, don’t say that a red CTA button led to poor conversion if the steps of the checkout process don’t make sense.
Include recommendations – Don’t include any hi-fi prototypes or mockups in the usability report, but definitely suggest a few improvements. To supplement written suggestions, our own UX Researcher Ben Kim also links to lo-fi wireframes or prototypes in a UXPin project dedicated to usability testing.
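Severity tagging lends itself to a quick sort when assembling the report, so the highest-priority issues always appear first. A hypothetical sketch (the issue log, category names, and severity levels below are invented for illustration):

```python
# Map severity tags to sort order: High issues surface first in the report.
SEVERITY_RANK = {"High": 0, "Medium": 1, "Low": 2}

# Hypothetical issue log, categorized as suggested above.
issues = [
    {"category": "Layout Issues",     "severity": "Low",
     "note": "CTA button color blends into the background"},
    {"category": "Navigation Issues", "severity": "High",
     "note": "Checkout steps appear in a confusing order"},
    {"category": "Navigation Issues", "severity": "Medium",
     "note": "Product listings are hard to browse by category"},
]

for issue in sorted(issues, key=lambda i: SEVERITY_RANK[i["severity"]]):
    print(f'[{issue["severity"]}] {issue["category"]}: {issue["note"]}')
```

Keeping severity as data rather than prose also makes it trivial to regenerate the summary when issues are added or re-triaged after the follow-up meeting.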
The usability report should be a folder, not a single file. Don’t forget to include things like:
Formal usability report
Supporting charts, graphs, and figures
Previous testing documentation (e.g., the list of questions the user was asked)
Videos or audio tracks of the test (which is why it’s good to record sessions)
The documentation is just the starting point. Schedule a follow-up meeting with the team to review the usability report and relevant data, discussing issues and the outlined recommendations.
Don’t wait until the end of the project to conduct your usability testing. Once you have a lo-fi prototype, start testing. The data is less about validation and more about inspiration: test early, and test often, so you can actually put the results to use before it’s too late.
To get started on your next usability test, download the free Usability Testing Kit created by our CEO Marcin Treder. The kit includes 5 templates for planning, running, and documenting your usability test.