Friday, April 15, 2011

User Research You Should Be Doing (but probably aren't)

Startups know they should get out of the building and talk to their customers, but sometimes they’re a little too literal about it. There are tons of ways to get great information from your customers. The trick is knowing which technique answers the questions you have right now.

Sure, you’re doing usability tests and trying to have customer development interviews, but here are a few slightly unusual qualitative user research techniques you should be employing but probably aren’t.

Competitor Usability Testing

Have you ever considered running a user test on a competitor’s site?

This one’s fun because it feels a little sneaky. It also gets you a tremendous amount of great information, since chances are somebody is already making mistakes that you don’t have to make.

For example, when one of my clients, Crave, wanted to build a marketplace for buying and selling collectibles, we spent time watching people using other shopping and selling sites. We learned what people loved and hated about the products they were already using, so we could create a product that incorporated only the good bits.

The result was a buying and selling experience that users preferred to several big name shopping sites that will remain nameless.

Bonus Tip: There’s always the temptation to borrow ideas from a big competitor with the excuse, “well, so and so is doing it, and they’re successful, so it must be right!” Guess what? Sometimes other companies are successful for a lot of reasons other than that thing you’re stealing from them. Make sure users like that part of a competitor’s product before using it in your own.

Super Targeted Usability Testing

Typically, when conducting usability tests, we’ll run several sessions on an entire product with lots of scenarios and tasks. But often that generates a ton of data that you have to wade through and analyze.

Instead, try doing a few sets of three 10-15 minute tests on a very specific feature. That’s a lot of numbers in a row. How about an example?

When we were building Crave, we wanted to test a particular new feature that we thought users would love. When we actually launched it, we were a little concerned that it would be hard to find, so immediately after launch, we ran three quick, unmoderated user tests with one task.

As we suspected, all three users had some trouble finding the feature. We immediately created a contextual help bubble that guided interested users to the feature. Then we ran three more tests. None of the new users had any problems at all.

The entire process took about three hours, and users regularly tell us how much they like that feature.

Bonus Tip: Using unmoderated user testing services like UserTesting.com and TryMyUI.com (and about a dozen others), make testing like this fast and cheap. You can test, build, deploy, and iterate several times in a single day. If you don’t do continuous deployment, you can use them to test high fidelity prototypes rather than your actual product.

Purely Observational Testing

This type of research is the exact opposite of the last one, because you’re not testing a very specific part of your interface or a brand new feature.

Sometimes you’re trying to generate ideas for what you could do next that would give you the biggest ROI. For example, you might know that there’s a problem somewhere in your metrics, and you’re trying to understand what pain points are causing the drop off.

Whatever the reason, one of the most enlightening things you can do is a purely observational test. This means sitting down, shutting up, and watching people use your software in whatever way they want to do it.

You don’t give them tasks or scenarios. You just schedule them for a time they’d normally be using your product and arrange to observe them, remotely or in person.

Bonus Tip: Make sure to do this with new users, power users, and occasional users, as well as people who fit in all of your various persona groups. This will give you a fabulous overview of what people are really doing with your product.

Micro-Usability Tests

These are quite different, and some don’t fall neatly into the qualitative testing realm, but they can be very useful.

Navigation Tests
When we were building Crave, we obviously wanted to make sure that things were incredibly easy to purchase, since that’s where we’d make money.

Since we had wireframes and visual mockups of the screens, we simply loaded them into UsabilityHub’s NavFlow and asked users to show us how they’d purchase something.

After a couple of tests, we knew exactly where in the purchase funnel we needed to improve things before we ever even had a real funnel!

Landing Page Tests
Another type of micro-usability test can help you fine-tune your messaging.

Ever run one of those landing page tests where you compare two different messages and see which one results in more conversion? Ever wonder why the winner was the winner?

This is one of those questions that’s not very cost effective to answer with standard usability testing, since what you really want is a couple of minutes of testing with a lot of different users rather than an hour with just a few users.

Luckily, you can post a screen on FiveSecondTest with a few simple questions like, “What does this product do?” and “Who is this product for?” and get extremely cheap feedback about people’s first impression of your landing page.

Now you’ll not only know which version won, but you’ll have a better idea of WHY it won. Different tests that we ran at Crave showed that some messages led people to believe that the site was about “buying and selling” while others led people to believe it was about “sharing” or “meeting people.” And, of course, some messages didn’t mean anything to anybody. We didn’t use those.

Bonus Tip: As with everything, I like running smaller versions of these micro-usability tests iteratively. With FiveSecondTest, I might run each version of the page with 15 people and then update the messaging until I get a landing page where the vast majority of respondents understand exactly what my product is selling.


Do These Techniques Replace Usability Testing?

Seriously? Is that even a question? Of course not!

You still need to do regular usability testing and conduct standard customer development interviews. You still need to get out and ask your users questions and have them perform predefined tasks and talk about their problems.

But the next time you want a particular question answered, think a little harder about the best way to answer it and the best tools to use.

Like the post? Follow me on Twitter!

Or check out my presentation on DIY User Research for Startups.