Monday, February 25, 2013

Don't Make Your Users Feel Like Idiots

I’m a smart person. I’ve been using the Internet since the early 1990s. I know how to program. I only feel the need to point this out because I’m about to share with you a story in which I come across as a complete, blithering idiot, and I’m feeling a little defensive about it.

I got an email from an event that I won’t name, but I’m guessing a few of you are getting emails of your own. If you didn’t make the same mistake, then bask in the glory of being better at computers than I am. If you did make the same mistake, welcome to the club. You’re not alone.

The email I received was several paragraphs long and told me all the places and times where I could pick up my badge for the event. It also said that they were introducing something new this year called a QuickCode. The email instructed me to bring my photo ID and my QuickCode to pick up my badge.

Then it had the following line:
Laura Klein’s QuickCode:

That’s it. After that, it went on to give me more badge-related information. “Aha,” I thought. “The automated system has failed to print my QuickCode.”

I immediately wrote back and said that I didn’t get my QuickCode. To the organization’s credit, a very polite person immediately wrote back, explained that the QuickCode was an image, and even gave me instructions on how to turn on images in my email, in case I didn’t know how.

I was, as you might imagine, embarrassed. I mean, of course I know how to show images in an email. I just want to make that clear, because I’m coming off as enough of an idiot without you thinking I can’t use Gmail.

What I didn’t know was that the QuickCode was an image. Because I’ve never seen a QuickCode. Because a QuickCode wasn’t a thing to me until an hour ago. Because a QuickCode is just a name that somebody made up for a bar code that they’re using to help with their badging system.

Obviously the people writing the email knew what a QuickCode was, so it wasn’t at all surprising to them that you’d have to turn on images to see one. For those of us (ok, me) who had never heard of a QuickCode, this wasn’t immediately obvious. A QuickCode could just as easily have been a string of numbers and letters that could have been printed in the email. Of course, when I went back and re-read the email, the first paragraph did mention “scanning” the QuickCode, so I might have figured out what it was, but there were a lot of paragraphs in the email that I quickly skimmed. This is not unusual user behavior.

The interesting thing is that they could have kept me from acting like an idiot, and saved themselves a support email, by just including the phrase, “If you don’t see your QuickCode, try turning on images in your email.” They could have made that the alt text for the image, so only people who didn’t have images turned on would see it.
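
For instance, here’s a hypothetical sketch of the image tag that would have saved us both the trouble (the URL is invented):

```python
# Hypothetical sketch of the QuickCode image tag in the email's HTML.
# The alt text is only shown to recipients whose mail clients block
# images -- which is exactly the group that needs the explanation.
quickcode_img = (
    '<img src="https://example.com/quickcodes/12345.png" '
    'alt="If you do not see your QuickCode above, '
    'try turning on images in your email client.">'
)
print(quickcode_img)
```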

Why am I telling you all this? I’m telling you this because we make assumptions of this sort in our interfaces every day. We assume people know that a QuickCode is an image, even though they’ve never heard of a QuickCode. We assume people know what our products do, even though they’ve never heard of our product. We assume people know where to go within our products to find the things they’re looking for, even though they weren’t in the meeting where we determined our product structure.

We are almost always wrong.

The moral of this story is not (just) that our users are going to do stupid things sometimes. It’s not even that they’re probably only going to skim our very long emails. The moral is that we constantly need to be asking ourselves what we really expect a user to understand about our product, and we need to have ways to preemptively help them in places where we’re presenting new concepts or unfamiliar terminology.

Users don’t know our slang. They don’t know our jargon. They don’t know our product. If we want them to use our products successfully, we need to teach them what they need to know without making them feel like idiots.

Wednesday, February 20, 2013

Combining Qualitative & Quantitative Research


Designers are infallible. At least, that’s the only conclusion that I can draw, considering how many of them flat out refuse to do any sort of qualitative or quantitative testing on their product. I have spoken with designers, founders, and product owners at companies of all sizes, and it always amazes me how many of them are so convinced that their product vision is perfect that they will come up with the most inventive excuses for not doing any sort of customer research or testing. 

Before I share some of these excuses with you, let’s take a look at the types of research I would expect these folks to be doing on their products and ideas.

Quantitative Research

When I say quantitative research in this context, I’m talking about a/b testing, product analytics, and metrics - things that tell you what is happening when users interact with your product. These are methods of finding out, after you’ve shipped a new product, feature, or change, exactly what your users are doing with it. 

Are people using the new feature once and then abandoning it? Are they not finding the new feature at all? Are they spending more money than users who don’t see the change? Are they more likely to sign up for a subscription or buy a premium offering? These are the types of questions that quantitative research can answer. 

For a simple example, if you were to design a new version of a landing page, you might run an a/b test of the new design against the old design. Half of your users would see each version, and you’d measure to see which design got you more registered users or qualified leads or sales or any other metric you cared about.
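
To make that concrete, here’s a minimal sketch of how you might check whether the new design actually won or just got lucky. The numbers are invented, and in practice your a/b testing tool does this math for you:

```python
import math

# Invented results from a hypothetical landing page a/b test.
control_visitors, control_signups = 5000, 400   # old design
variant_visitors, variant_signups = 5000, 460   # new design

p1 = control_signups / control_visitors
p2 = variant_signups / variant_visitors

# Two-proportion z-test: is the lift real, or plausibly just noise?
pooled = (control_signups + variant_signups) / (control_visitors + variant_visitors)
se = math.sqrt(pooled * (1 - pooled) * (1 / control_visitors + 1 / variant_visitors))
z = (p2 - p1) / se

# Two-sided p-value from the normal CDF.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"conversion: {p1:.1%} -> {p2:.1%}, z = {z:.2f}, p = {p_value:.3f}")
```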

Qualitative Research

By qualitative research, I mean the act of watching people use your product and talking to them about it. I don’t mean asking users what you should build. I just mean observing and listening to your users in order to better understand their behavior.

You might do qualitative testing before building a new feature or product so that you can learn more about your potential users’ behaviors. What is their current workflow? What is their level of technical expertise? What products are they already using? You might also do it once your product is in the hands of users in order to understand why they’re behaving the way they are. Do they find something confusing? Are they getting lost or stuck at a particular point? Does the product not solve a critical problem for them? 

For example, you might find a few of your regular users and watch them with your product in order to understand why they’re spending less money since you shipped a new feature. You might give them a task in order to see if they could complete it or if they got stuck. You might interview them about their usage of the new feature in order to understand how they feel about it. 


Excuses, Excuses

While it may seem perfectly reasonable to want to know what your users are really doing and why they are doing it, a huge number of designers seem really resistant to performing these simple types of research or even listening to the results. I don’t know why they refuse to pay any attention to their users, but I can share some of the terrible excuses they’ve given me. 


A/B Testing is Only Good for Small Changes

I hear this one a lot. There seems to be a misconception that a/b testing is only useful for things like button color and that by doing a/b testing you’re only ever going to get small changes. The argument goes something like, “Well, we can only test very small things and so we will test our way to a local maximum without ever being able to really make an important change to our user experience.”

This is simply untrue.

You can a/b test anything. You can show two groups of users entirely different experiences and measure how each group behaves. You can hide whole features from users. You can change the entire checkout flow for half the people buying things from you. You can test a brand new registration or onboarding system. And, of course, you can test different button colors, if that is something that you are inclined to do.

The important thing to remember here is that a/b testing is a tool. It’s agnostic about what you’re testing. If you’re just testing small changes, you’ll only get small changes in your product. If, on the other hand, you test big things - major navigation changes, new features, new purchasing flows, completely different products - then you’ll get big changes. And, more importantly, you’ll know how they affected your users.
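
To see why the tool is agnostic, here’s a minimal sketch of the assignment logic an a/b testing system might use under the hood. The function name and the 50/50 split are illustrative assumptions, not any particular library’s API:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: int = 50) -> str:
    """Deterministically bucket a user into 'control' or 'variant'.

    Hashing user_id together with the experiment name means the same
    user always sees the same version, and each experiment gets its
    own independent split.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "variant" if int(digest, 16) % 100 < split else "control"

# The coin flip doesn't care what the variants are -- a button color or
# an entirely different checkout flow hangs off the same mechanism.
print(assign_variant("user-123", "new-checkout-flow"))
```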


Quantitative Testing Leads to a Confused Mess of an Interface

This is one of those arguments that has a grain of truth in it. It goes something like, “If we always just take the thing that converts best, we will end up with a confusing mess of an interface.”

Anybody who has looked at Amazon’s product pages knows the sort of thing that a/b testing can lead to. They have a huge amount of information on each screen, and none of it seems particularly attractive. On the other hand, they rake in money.

It’s true that when you’re doing lots of a/b testing on various features, you can wind up with a weird mishmash of things in your product that don’t necessarily create a harmonious overall design. You can even wind up with features that, while they improve conversion on their own, end up hurting conversion when they’re combined.

As an example, let’s say you’re testing a product detail page. You decide to run several a/b tests simultaneously for the following new features:

  • customer photos
  • comments
  • ratings
  • extended product details
  • shipping information
  • sale price
  • return info

Now, let’s imagine that each one of those items, in its own a/b test, increases conversion by some small, but statistically significant, margin. That means you keep all of them. Now you’ve got a product detail page with a huge number of things on it. You might, rightly, worry that the page is becoming so overwhelming that you’ll start to lose conversions.

Again, this is not the fault of a/b testing – or in this case, a/b/c/d/e testing. This is the fault of a bad test. You see, it’s not enough that you run an a/b test. You have to run a good a/b test. In this case, just because the addition of a particular feature to your product page improved conversions doesn’t mean that adding a dozen new features to your product page will increase your conversions.

In this instance, you might be better off running several a/b tests serially. In other words, add a feature, test it, and then add another and test. This way you’ll be sure that every additional feature is actually improving your conversion. Alternatively, you could test a few different versions of the page with different combinations of features to see which converts best. 
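
Here’s a rough sketch of the serial approach. The run_ab_test function is a stub standing in for weeks of real traffic and a real significance test, not anyone’s actual API:

```python
import random

def run_ab_test(control_page, variant_page):
    """Stub standing in for a real experiment: weeks of traffic and a
    significance test, not a coin flip. Returns True if the variant won."""
    return random.random() < 0.5

features = ["customer photos", "comments", "ratings",
            "extended product details", "shipping information",
            "sale price", "return info"]

# Serial testing: each candidate feature is tested against the page as it
# actually is right now, so every win is a win for the page as a whole.
live_page = []
for feature in features:
    if run_ab_test(live_page, live_page + [feature]):
        live_page.append(feature)

print("features that survived:", live_page)
```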


A/B Testing Takes Away the Need For Design

For some reason, people think that a/b testing means that you just randomly test whatever crazy shit pops into your head. They envision a world where engineers algorithmically generate feature ideas, build them all, and then just measure which one does best.

This is just absolute nonsense.

A/B testing only specifies that you need to test new designs against each other or against some sort of control. It says absolutely zero about how you come up with those design ideas.

The best way to come up with great products is to go out and observe users and find problems that you can solve and then use good design processes to solve them. When you start doing testing, you’re not changing anything at all about that process. You’re just making sure that you get metrics on how those changes affect real user behavior.

Let’s imagine that you’re building an online site to buy pet food. You come up with a fabulous landing page idea that involves some sort of talking sock puppet. You decide to create this puppet character based on your intimate knowledge of your user base and your sincere belief that what they are missing in their lives is a talking sock puppet. It’s a reasonable assumption.

Instead of just launching your wholly re-imagined landing page, complete with talking sock puppet video, you create your landing page and show it to only half of your users, while the rest of your users are stuck with their sad, sock puppet-less version of the site. Then you look to see which group of users bought more pet food. At no point did the testing process have anything to do with the design process.

It’s really that simple. Nothing about a/b testing determines what you’re going to test. A/B testing has literally nothing to do with the initial design and research process.

Whatever you’re testing, you still need somebody who is good at creating the experiences you’re planning on testing against one another. A/B testing two crappy experiences does, in fact, lead to a final crappy experience. After all, if you’re looking at two options that both suck, a/b testing is only going to determine which one sucks less.

Design is still incredibly important. It just becomes possible to measure design’s impact with a/b testing.


There’s No Time to Usability Test

When I ask people whether they’ve done usability testing on prototypes of major changes to their products, I frequently get told that there simply wasn’t time. It often sounds something like, “Oh, we had this really tight deadline, and we couldn’t fit in a round of usability testing on a prototype because that would have added at least a week, and then we wouldn’t have been able to ship on time.” 

The fact is you don't have time NOT to usability test. As your development cycle gets farther along, major changes get more and more expensive to implement. If you're in an agile development environment, you can make updates based on user feedback quickly after a release, but in a more traditional environment, it can be a long time before you can correct a big mistake, and that spells slippage, higher costs, and angry development teams. Even in agile environments, it’s still faster to fix things before you write a lot of code than after you have pissed off customers who are wondering why you ruined an important feature that they were using. 

I know you have a deadline. I know it's probably slipped already. It's still a bad excuse for not getting customer feedback during the development process. You're just costing yourself time later. I’ve never known good usability testing to do anything other than save time in the long run on big projects.


Qualitative Research Doesn’t Work Because Users Don’t Know What They Want

This is possibly the most common argument against qualitative research, and it’s particularly frustrating, because part of the statement is quite true. Users aren’t particularly good at coming up with brilliant new ideas for what to build next. Fortunately, that doesn’t matter. 

Let’s make this perfectly clear. Qualitative research is NOT about asking people what they want. At no point do we say, “What should we build next?” and then relinquish control over our interfaces to our users. People who do this are NOT doing qualitative research. 

Qualitative research isn’t about asking people what they want and giving it to them. Qualitative research is about understanding the needs and behaviors of your users. It’s about really knowing what problem you’re solving and for whom.

Once you understand what your users are like and what they want to do with your product, it’s your job to come up with ways to make that happen. That’s the design part. That’s the part that’s your job.


It’s My Vision - Users Will Screw It Up

This can also be called the "But Steve Jobs doesn't listen to users..." excuse. 

The fact is, understanding what your users like and don't like about your product doesn't mean giving up on your vision. You don't need to make every single change suggested by your users. You don't need to sacrifice a coherent design to the whims of a user test. You don’t even need to keep a design just because it converts better in an a/b test. 

What you do need to do is understand exactly what is happening with your product and why. And you can only do that by gathering data. The data can help you make better decisions, but they don’t force you to do anything at all.


Design Isn’t About Metrics

This is the argument that infuriates me the most. I have literally heard people say things like, “Design can’t be measured, because design isn’t about the bottom line. It’s all about the customer experience.”

Nope.

Wouldn’t it be a better experience if everything on Amazon were free? Be honest! It totally would.

Unfortunately, it would be a somewhat traumatic experience for the Amazon stockholders. You see, we don’t always optimize for the absolute best user experience. We make tradeoffs. We aim for a fabulous user experience that also delivers fabulous profits.

While it’s true that we don’t want to just turn our user experience design over to short term revenue metrics, we can vastly improve user experience by seeing which improvements and features are most beneficial for both users and the company.

Design is not art. If you think that there’s some ideal design that is completely divorced from the effect it’s having on your company’s bottom line, then you’re an artist, not a designer. Design has a purpose and a goal, and those things can be measured.


So, What’s the Right Answer?

If you’re all out of excuses, there is something that you can do to vastly improve your product. You can use quantitative and qualitative data together. 

Use quantitative metrics to understand exactly what your users are doing. What features do they use? How much do they spend? Does changing something big have a big impact on real user behavior?

Use qualitative research to understand why your users do what they do. What problems are they trying to solve? Why are they dropping out of a particular task flow when they do? Why do they leave and never come back?

Let’s look at an example of how you might do this effectively. First, imagine that you have a payment flow in your product. Now, imagine that 80% of your users are not getting through that payment flow once they’ve started. Of course, you wouldn’t know that at all if you weren’t looking at your metrics. You also wouldn’t know that the majority of people are dropping out in one particular place in the flow.
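
Here’s a minimal sketch of the kind of funnel measurement that surfaces this. The event names and numbers are invented for illustration:

```python
from collections import Counter

# Invented event log: (user_id, step reached in the payment flow).
events = [
    ("u1", "cart"), ("u1", "shipping"), ("u1", "billing"),
    ("u2", "cart"), ("u2", "shipping"),
    ("u3", "cart"), ("u3", "shipping"),
    ("u4", "cart"),
    ("u5", "cart"), ("u5", "shipping"), ("u5", "billing"), ("u5", "confirm"),
]

steps = ["cart", "shipping", "billing", "confirm"]
reached = Counter(step for _, step in events)

# Step-to-step conversion shows *where* people drop out -- here, half the
# users who reach "shipping" never reach "billing".
for prev, curr in zip(steps, steps[1:]):
    rate = reached[curr] / reached[prev]
    print(f"{prev} -> {curr}: {reached[curr]}/{reached[prev]} ({rate:.0%})")
```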

Next, imagine that you want to know why so many people are getting stuck at that one place. You could do a very simple observational test where you watch four or five real users going through the payment flow in order to see if they get stuck in the same place. When they do, you could discuss with them what stopped them there. Did they need more information? Was there a bug? Did they get confused?

Once you have a hypothesis about what’s not working for people, you can make a change to your payment flow that you think will fix the problem. Neither qualitative nor quantitative research tells you what this change is. They just alert you that there’s a problem and give you some ideas about why that problem is happening. 

After you’ve made your change, you can run an a/b test of the old version against the new version. This will let you know whether your change was effective or if the problem still exists. This creates a fantastic feedback loop of information so that you can confirm whether your design instincts are functioning correctly and you’re actually solving user problems. 

As you can hopefully see from the example, nobody is saying that you have to be a slave to your data. Nobody is saying that you have to turn your product vision or development process over to an algorithm or a focus group. Nobody is saying that you can only make small changes. All I’m saying is that using quantitative and qualitative research correctly gives you insight into what your users are doing and why they are doing it. And that will be good for your designs, your product, and your business.


Monday, February 4, 2013

Make Meetings Less Awful

Meetings are the worst. I mean, my God, they suck. The vast majority of meetings are simply awful. 

But they don’t have to be!

If you’ve ever been in a meeting where you felt like your soul was being sucked out of your body through your eyes, I have a few tips that will make future meetings more tolerable. If you implement them correctly, they might even make some of your meetings useful! Imagine that. 


Write It Down Ahead of Time

Agendas. You should have one. Well, this seems painfully obvious, doesn’t it? But seriously. How many meetings do you attend where not a single person knows beforehand exactly what you’ll be talking about?

Here’s a simple solution for making meetings wildly more productive. The person who is in charge of the meeting needs to make an agenda and send it out to all the attendees before the meeting. A full day is great, especially if there are things that people might want to research in preparation for the meeting. Even a few hours is helpful. It’s best if the person in charge reaches out to attendees early to see if they have anything they’d like to see on the agenda. 

The corollary to this is that the meeting attendees must actually read the agenda, understand what will be discussed, and come to the meeting prepared to discuss and make a decision on any of the agenda items they care about. 

And, of course, if they don’t care about any of the agenda items, they probably shouldn’t attend the meeting. 

Another, slightly more spontaneous, method is the box on the whiteboard. We used to do this in engineering meetings at IMVU. Before the weekly eng meeting started, people could add topics they wanted to discuss to a list on the whiteboard. Once the meeting started, someone drew a box around the list. Nothing could be added to the list once we started, and nothing was discussed that wasn’t in the box. As a bonus, it encouraged people to get to the meeting early if they had a topic to discuss. 


Everything Has a Next Step 

Meetings are not open-ended discussion forums. They’re not group therapy sessions. Meetings are for making decisions. Every single thing you discuss in a meeting should have a decision and a deliverable.

Here’s an example. Once, I was in a meeting to talk about a change somebody wanted to make to a product’s design. We sat together for half an hour discussing the types of research she could do to figure out whether the design would work or whether it was small enough just to ship. At the end of about 30 minutes, she announced, “Well, I don’t think we’re going to decide this now.” To which I responded, “Why the hell not?”

Stop having discussions just to have discussions. Refusing to make a decision in this meeting just ensures that you need to have another meeting later, and nobody wants that. Make sure that all agenda items at meetings have outcomes. Sometimes the outcome will be, “Susan is going to go off and investigate these three questions and report back so that we can make a more informed decision.” Sometimes the outcome will be, “Laura is in charge of building a prototype and will pull in whomever she needs to help.” Sometimes the outcome will be, “We’re shipping this damned thing as soon as we leave the room.” I kind of wish that were always the outcome.

The outcome will never be, “Well, we need to think more about this.” The problem with this statement is that it’s too vague. There is nothing actionable about this. Nobody is assigned to do anything, so nothing will really get done, and the next time the point comes up, you’ll have to have the whole conversation over again. Everything from a meeting needs a specific next step and somebody who is assigned to take it. 


Fewer Attendees

Meetings become far less productive after about four people, so whenever possible, keep meetings as small as you can. Obviously you sometimes need to have more folks, but really ask yourself whether everybody needs to be in the meeting, or if somebody would do just as well with a quick report after the fact.

If there are people who routinely aren’t contributing to the meeting in any way - no agenda items, no adding to the discussion, no making decisions, no deliverables after the fact - then they are great candidates for not getting an invitation next time. Presumably you’re paying these people, and I have to imagine there is something more productive they could be doing than sitting in a meeting checking their email.


Every Meeting Has a Leader

Someone has to be in charge of the meeting. Always. 

The person in charge of the meeting has a lot of responsibilities. The leader must make the agenda, keep everybody on track, mediate disputes, ensure that everybody who has a contribution gets to make that contribution, make sure that all the deliverables and next steps are being captured, and follow up on the things that come out of the meetings. 

I was in a meeting once that was led by a particularly ineffective PM. We were discussing what the priorities would be for her product (don’t even get me started on why engineers and designers were discussing this when it was so clearly her job). We were each giving our opinions about what should be done first, and the discussion began to get heated. 

Instead of stepping in and guiding the discussion or just deciding what order we’d build things in, the PM sat back and let everybody scream at each other. The meeting ended with someone in tears (unsurprisingly, this person wasn’t me) and no decision made about prioritization. 

Unless somebody is in charge, meetings just meander and go on for three times as long as they need to with nobody who is willing or able to say, “Right. We’re done here. Let’s go do something productive.” Having someone whose job it is to end discussion and assign tasks makes things go much more smoothly and quickly. 

Besides, if we actually expected some work from the people who call all those meetings, maybe they’d call fewer damned meetings. 


No Broadcast Meetings

I’m going to assume that everybody working for your company is literate. If this is true, please stop having meetings where you read things to them. You’re not in kindergarten. This is not story time. 

I have been to too many meetings where a PM or CEO or somebody else who should know better shows a slide deck and then proceeds to read all the slides to the audience for an hour. 

Here’s an idea: send the deck out the day before. Tell people to read it for themselves and come up with questions. At the meeting, spend no more than five minutes summarizing the most important things about the slide deck (“We made more money this month than last month! Yay!”), then take questions from the audience about the rest of the deck. 

If you are concerned that people will miss critical information because they are failing to read important emails, that’s really something that you need to address separately. I’ve found that reducing meeting times by a few hours a week gives people far more time to read their email or to do something actually productive. 


More Discussions, More Working Sessions, Fewer Meetings

You know what I like more than meetings (besides everything)? I like discussions. Discussions are things that happen between two or three people who are all interested in and informed about a particular topic. They tend to happen in hallways and they often help disseminate important information to the people who need it. 

I also like working sessions, in which a few people all work together on something like a design or code. Working sessions generally involve a lot of writing on whiteboards or pair programming or gathering around somebody’s screen to try different variations of a particular wireframe. Working sessions are better than even good meetings because by the end of the working session, you’re often done with whatever it was you were going to just talk about in the meeting. 

And maybe that’s the most important point here. Meetings are not conducive to DOING. They are conducive to TALKING. Talking is the enemy of doing. By making a few small changes in the way you conduct your meetings, you can turn them into places where things get done rather than just talked about. And that will make meetings suck a whole lot less. I promise.