
Testing & Redesigning Yelp: User Research (from upcoming e-book)

by
Jerry Cao

    In our second sneak peek, we summarized the process of breaking down Yelp’s business model and deciding the right types of user tests.

    Now, let’s take a look at the actual user research. To get qualitative feedback on the Yelp site, we chose remote unmoderated testing since it allows for simultaneous testing. For quantitative data on the Yelp site, we chose closed card sorting and click testing to show us how users prioritize content. We also gathered quantitative data on the Yelp support site by conducting a tree test and open card sort to show gaps in the information architecture.

    The entire process of user testing and redesign along with screenshots of the new Yelp site will all be included in our upcoming e-book. Below, you can get an overview of the user research process.

    Getting Qualitative Data

    The process of our unmoderated user testing involved two steps: choosing user demographics and deciding our test objectives/user tasks.

    1. Choosing our users

    A user test isn’t very helpful without users, so our first step was deciding the criteria for recruiting test participants.

    Because Yelp has such a large existing user base (138 million monthly unique visitors, according to Yelp’s Q2 2014 report), we needed to make sure that our redesigned website was still usable by current active users. We decided that semi-frequent users would be our target audience since they have a more middle-of-the-road experience with Yelp. Because Yelp users come from all backgrounds, we decided that age, income, gender, and Internet experience would not be filtering criteria. We also followed notable user researcher Jakob Nielsen’s advice to test with 5 users, since we wanted to stay lean and mean in our design sprint.
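    Nielsen's 5-user guideline comes from the problem-discovery model he published with Tom Landauer: the share of usability problems found by n users is roughly 1 - (1 - L)^n, where L is the share of problems a single user uncovers (about 31% in their data). A quick sketch of that curve:

```python
def problems_found(n_users, l=0.31):
    """Expected share of usability problems uncovered by n_users,
    per the Nielsen & Landauer problem-discovery model."""
    return 1 - (1 - l) ** n_users

# 5 users already uncover roughly 84% of problems under this model,
# which is why small test groups go a long way in a lean design sprint.
for n in (1, 3, 5):
    print(f"{n} users -> {problems_found(n):.1%}")
```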

    Since some of our tasks required users to hold an account, we needed to split our group of 5 users into 2 segments: three users who had Yelp accounts and two users who did not. We imposed strict requirements on Yelp usage for those who had accounts to eliminate the likelihood that they were power users.

    2. Deciding objectives & user tasks

    We wanted to learn how semi-frequent Yelp users complete very common tasks (to learn which features are most important) and at least one uncommon task (to gauge if they can learn how to use a more advanced feature).

    For the common tasks, we observed how all test participants:

    • Found a business based on specific requirements
    • Found a business without being given many requirements
    • Found a specific location to learn specific information

    For the uncommon tasks, we assigned a different task to each group. Users with existing Yelp accounts (Group 1) were asked to save businesses for later reference — this would let us test the functionality of Yelp’s Bookmark and List features, which we had heard complaints about. Users without Yelp accounts (Group 2) were asked to find an event — this would tell us if they would search or browse the site and how they would decide which event to attend.

    Getting Quantitative Data

    Our process here involved open card sorting, closed card sorting, tree testing, and click testing. For all four types of testing, we did not filter by age, gender, income, or web experience (the same parameters as our unmoderated user testing). However, since these tests were quantitative in nature, we needed a certain degree of statistical significance, so unlike our user testing, we tested 40 users instead of 5. Since semi-frequent users are likely to require assistance when using the Yelp site, we also tested the support site.

    1. Closed Card Sorting

    As discussed in our Product Definition post, closed card sorting allows users to organize information based on existing categories. This tells us how effective the existing content structure is.


    i. Choosing Our Users

    We gave all candidates a pre-test questionnaire which asked how often they use Yelp (or similar sites), their likelihood of using search filters, and their average budget per restaurant meal.

    ii. Deciding objectives & user tasks

    Our closed card sort had three simple objectives:

    • Determine how commonly people use search filters on Yelp (or a similar site)
    • Determine which filters are most important
    • Determine which filters are least important

    To accomplish our objectives, each card represented a search filter. In total, we had 47 cards representing all of Yelp’s 47 search filters. We then asked participants to sort them into categories of importance: very important, somewhat important, not important, and unsure.
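    Analyzing a closed sort like this amounts to tallying, per card, how many of the 40 participants placed it in each importance category. A minimal sketch with made-up responses and a handful of hypothetical filter names:

```python
from collections import Counter

# Hypothetical responses: each participant assigns every card (a search
# filter) to one of the four importance categories used in the study.
responses = [
    {"Price": "very important", "Open Now": "very important", "Dogs Allowed": "not important"},
    {"Price": "very important", "Open Now": "somewhat important", "Dogs Allowed": "unsure"},
    {"Price": "somewhat important", "Open Now": "very important", "Dogs Allowed": "not important"},
]

def tally(responses):
    """Count, per card, how many participants chose each importance level."""
    counts = {}
    for sorting in responses:
        for card, category in sorting.items():
            counts.setdefault(card, Counter())[category] += 1
    return counts

votes = tally(responses)
# Rank cards by their "very important" votes to surface the top filters.
ranking = sorted(votes, key=lambda c: votes[c]["very important"], reverse=True)
```

With real data, the top and bottom of this ranking answer the second and third objectives directly.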

    2. Click Testing

    Click testing (also known as first-click tests) shows which part of the interface is most important to users upon first impression. It essentially shows which elements of the navigation are crucial, and which are confusing.


    i. Choosing Our Users

    We gave all candidates a pre-test questionnaire asking them about how often they use Yelp (or similar sites) and their likelihood and frequency of writing Yelp reviews.

    ii. Deciding objectives & user tasks

    The click test had two objectives:

    • Determine how many people immediately use the search bar on Yelp
    • Determine if the navigation labels are clear

    To fulfill the objectives, we asked users to accomplish certain tasks based on screenshots and recorded the results in a heatmap (as you can see above).
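    The aggregation behind such a heatmap is essentially a grid count of first-click coordinates: the denser a cell, the more users went there first. A sketch with made-up click data:

```python
# Hypothetical first-click coordinates (x, y) in pixels on a screenshot.
clicks = [(600, 90), (610, 85), (95, 300), (605, 95), (880, 90)]

def heatmap(clicks, cell=100):
    """Bin clicks into cell x cell grid squares and count clicks per square."""
    counts = {}
    for x, y in clicks:
        key = (x // cell, y // cell)
        counts[key] = counts.get(key, 0) + 1
    return counts

hm = heatmap(clicks)
hottest = max(hm, key=hm.get)  # the grid cell drawing the most first clicks
```

If the hottest cell sits over the search bar, that answers the first objective; clicks scattered across the navigation labels would point to the second.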

    3. Tree Testing

    Tree testing, the first test we ran on the support site, shows us how users interact with the navigation in the absence of a search function.


    i. Choosing Our Users

    We asked all candidates how often they access support material on Yelp (or a similar website).

    ii. Deciding objectives & user tasks

    Support sites can become bloated and disorganized over time as companies neglect them and unconnected people add content at the speed of thought. As a result, the information architecture can lose functionality. Therefore, our objectives were:

    • Test clarity and effectiveness of the Yelp support site information architecture
    • Reveal which parts of the support site navigation are confusing

    To accomplish the above objectives, we uploaded the entire site map of Yelp’s support site to form a “tree” (as you can see above). We then wrote user tasks and asked the participants to click through the tree to identify where they think they could complete the tasks.
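    A tree test boils down to recording each participant's click path through the tree and checking whether it ends at the node that actually answers the task. A sketch, using a hypothetical slice of a support-site tree and made-up paths:

```python
# Hypothetical slice of the support-site tree (nested dicts; leaves are empty).
tree = {
    "Support": {
        "Account": {"Delete my account": {}, "Change email": {}},
        "Reviews": {"Report a review": {}, "Edit my review": {}},
    }
}

def success_rate(paths, target):
    """Share of participants whose final click landed on the target node."""
    hits = sum(1 for path in paths if path and path[-1] == target)
    return hits / len(paths)

# Task: "Where would you report an inappropriate review?"
paths = [
    ["Support", "Reviews", "Report a review"],  # direct success
    ["Support", "Account", "Change email"],     # wrong branch - a confusing label
    ["Support", "Reviews", "Report a review"],  # success
]
rate = success_rate(paths, "Report a review")
```

Low success rates on a task flag the confusing branches of the navigation, which is exactly the second objective.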

    4. Open Card Sorting

    Open card sorting was the second test we ran on the support site since we now wanted to see how users could categorize information without an existing information architecture. This would give us deeper insight into their thinking processes and help us work backwards from the user.


    i. Choosing Our Users

    Like the previous tree test, we simply asked our candidates how often they’ve used the support site on Yelp (or a similar site).

    ii. Deciding objectives & user tasks

    Our objectives were quite simple and similar to those of the tree test, with one addition: revealing new information architectures. We wanted to:

    • Figure out how users logically categorize content on Yelp’s Support Site
    • Determine if the current IA of the Yelp support site is effective
    • Get insight into better IAs based on how users categorize information

    For the open card sort, we created cards that represented the title of each page on the support site. Each card was worded as a question, e.g. “Can I sue Yelp for a bad review?” Users were then asked to sort all the cards into whatever categories made the most sense.
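    A common way to mine open-sort results is a co-occurrence count: pairs of cards that participants repeatedly place in the same group are candidates for the same category in a new IA. A sketch with a few hypothetical cards and groupings:

```python
from itertools import combinations
from collections import Counter

# Hypothetical open-sort results: each participant's own groupings of cards.
sorts = [
    {"Legal": ["Can I sue Yelp for a bad review?", "Who owns my photos?"],
     "Account": ["How do I log in?"]},
    {"Questions": ["Can I sue Yelp for a bad review?", "How do I log in?"],
     "Rights": ["Who owns my photos?"]},
    {"Policies": ["Can I sue Yelp for a bad review?", "Who owns my photos?"],
     "Login": ["How do I log in?"]},
]

def co_occurrence(sorts):
    """Count how often each pair of cards landed in the same group;
    high counts suggest cards that belong together in the new IA."""
    pairs = Counter()
    for sorting in sorts:
        for group in sorting.values():
            for a, b in combinations(sorted(group), 2):
                pairs[(a, b)] += 1
    return pairs

pairs = co_occurrence(sorts)
```

Clustering the cards by these counts is how user-generated categories emerge, letting the team work backwards from the user rather than from the existing structure.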

    Users First, Design Second

    The goal of all of the above types of testing was to refocus the design on the user. After all, if you’re not designing for users, then you’re really just designing for yourself.

    To see how the results of these 5 tests influenced the Yelp redesign, sign up for an early copy of the upcoming e-book.


    by Jerry Cao

    Jerry Cao is a content strategist at UXPin where he gets to put his overly active imagination to paper every day. In a past life, he developed content strategies for clients at Brafton and worked in traditional advertising at DDB San Francisco. In his spare time he enjoys playing electric guitar, watching foreign horror films, and expanding his knowledge of random facts. Follow him on Twitter.
