10 Things I Learned From Taking Over 100 Usability Tests
After taking nearly 150 usability tests (all on Usability Hub), I've noticed some interesting patterns in how people use usability tests today. For starters, take a look at what to avoid when creating your own usability tests. Jared Spool's article on Seven Common Usability Testing Mistakes is a good primer.
Jared's article was written in 2005. So what do people do with usability tests now in 2014? Have we learned to avoid the pitfalls that have been already documented?
1. People test color choices for their website
The goal of a usability test is to find the critical problems that prevent people from completing tasks. Unfortunately, this test doesn't clearly state a task that I need to complete. It asks what I think is nice for a background color.
This is more of a brand identity question, one that helps answer things like "Does the color match what the website is about? Does the color evoke the emotion the product is going for?" I believe questions like these should be settled by a creative director who oversees the brand identity, or worked out more closely with the client to determine whether the colors represent the brand's values.
2. People make tests difficult for users to complete
This is a very straightforward click test to measure where I would click. However, since I can't easily read any of the selections, it's hard for me to click where I actually want to. If the screenshot were zoomed in further, I'd be able to give the tester the information they want. I know what I'm supposed to do; there's just no good way for me to see what I'm clicking. I can't help but feel like this:
3. People test which logos people prefer
Is this logo presenting a critical problem internally? What does the business need to learn about the logo? My problem with logo choices in usability tests is that I don't understand what they could learn that would affect the outcome of the design. Here, I'm asked what I prefer, which is a highly subjective question.
Does anyone remember the Gap logo fiasco? Logos have long term impact on a brand's identity. I think random usability testing on a logo presents risks because the results will be meaningless if the business or client does not know whether a logo is a good representation of their values.
For example, if I'm being shown an architectural firm here, I couldn't care less which logo I pick because I may simply prefer colors over large text. But the impact of my decision reaches deep into this company's brand identity, even though I may never interact with them. That's too much risk to put into the hands of a random test demographic. If a company cares about its brand identity, it should take great care in choosing a logo with its partners, customers, and other stakeholders rather than letting Usability Hub's random demographic decide it.
4. People test to crowdsource logo design from scratch
This one was surprising. I didn't expect to see a usability click test to select which design I liked better as a logo and whether the letters should be lower case or upper case.
As I mentioned in the previous learning, it's more appropriate to put these questions in front of prospective customers or clients of the company, not random people. Come up with a few logo ideas and show them to a small, representative group of potential customers to see how they react. This is a more targeted way of gathering opinions on taste than sourcing logo design from a random demographic.
What you care about is how your potential or existing clients react to your logo. If you want to solicit logo feedback, ask the client first, then see what their response is and what their customers' response is.
5. People test what's best for the user, but I wonder whether it's best for the user AND the business
I've worked in customer support for over four years and have seen variations of this screen many times. In this case, I don't think a usability test is the right tool for measuring customer feedback about a transaction or store experience, because critical business metrics shouldn't be determined by a usability test. What they should really be asking is why they want to show this screen at all.
For example, let's say the Yes/No screen wins the click test because it's simpler and quicker for the user to select. Now whenever the business needs to figure out what they need to improve on, all they can see is how many customers responded Yes or No to the survey with no other qualitative information. Yes/No doesn't help the business improve because it doesn't provide enough information on what it is that needs improvement.
6. People test options that are nearly identical
In the example below, I barely recognized what the difference was until I looked at it twice.
Can you see it?
The left Filter column has a grey fill color. In the right variation, the grey fill extends along the top of the right column. It's subtle, and I don't find the change "obvious" at all, which is what the test told me to pick. I was expecting an entirely different layout to make those interactions more obvious.
Tests like these leave me frustrated when neither choice is obvious to the user; they call for a more dramatic difference in design between the options.
7. People use usability tests when they should be using A/B tests
This click test changes only one element: the copy of the call-to-action button at the bottom of the area. I'd argue this should be A/B tested instead, which would yield better results than a usability test. Why?
With an A/B test, you can test each variant's performance instead of waiting for a usability test to be completed. You can also test with real potential customers instead of random usability test takers like myself who may never see the product.
8. People don't set up usability tests correctly
First, framing the situation is really important so a user knows what the situation is. You can accomplish this with an "Imagine that you are [doing something] [in a location/state of mind/occasion]" statement.
Example: Imagine that you are shopping for a dress for a wedding.
Second, stating a task is also important so that the user knows what they should be doing. For a five-second test, it's usually recalling what stood out or whether they could tell what the company does. For a click test, it's a task related to what a designer thinks a person should click on.
Example: Could you tell what services or products this company offered?
Example: Click where you would find the sale section
In the test above, there is no framing of a situation nor is there a task stated. It's really hard for a user to figure out what form is easier to see if we don't know what we're supposed to be doing in the first place.
Aside from the test itself showing two of the same images (so there's really nothing to choose from), the setup also failed to define what the test was supposed to accomplish. This doesn't help the user or the business.
9. People don't select the right test to use
I took a five-second test where I stared at a website, but the subsequent questions asked me where I would click.
A five-second test helps you fine-tune your designs by analyzing their most prominent elements. It also tests first impressions and how easy your design is to understand.
A click test is used for placement/layout and helps determine if people can do what you are asking them to do.
This test should have been split into a five-second test and a click test. Because the question about where to click was lumped in with the five-second test, I couldn't accurately describe where I would click: the image was gone, and I didn't remember the element's location well enough to describe it thoroughly.
10. People love testing headline copy
I saw a lot of headline copy tests, more than I would have expected. They were also all five-second tests, so I wonder if they were made by the same person. In five seconds, it was hard to read all the headlines before the timer expired when there were more than two.
If these were to become Google AdWords links, a better way to validate what works is to list them all and see which performs best at generating clicks, optimizing for how many additional content pieces were read as a result of the incoming link. If this were for a content site, I would assume an editor would take the role of deciding which headline to run. If you want to learn how to write great headlines, head on over to Copyblogger.
11. (Bonus) People choose domain names with usability testing
I'm not sure how to explain this one, especially for something that sounds as important as a Belgian embassy. Please refer to your local government entities?
About one out of every three usability tests I took had issues or questionable reasoning behind it. I think we've learned quite a bit since Jared's article in 2005, but usability testing still seems far from perfect. Make sure you know why you're testing and that you're testing with the right audience. More sensitive items like logos should be put in front of a targeted demographic rather than a random one like Usability Hub's. Setting up the right test is just as important as what you will do with the results. Also, make sure the tasks you design are in line with what you want to learn.
Usability tests are great for identifying problems. Use them to find existing or new problems with your product or service. Just make sure to test again with the solutions you and your team come up with, to see if you've solved the problem you discovered.
Have you done any usability tests yourself? What did you test? I'd love to hear about them and what you learned.