12/20/11 | Uncategorized

Know Your Users: Quantitative Research VS. Qualitative Research

By Laura Klein (Principal, Users Know)
I have written about qualitative vs quantitative research before, but I still get a lot of questions about it. To answer some of those questions, I want to do a bit of a deeper dive here and give a few examples to help startups answer the key question.

To be clear, that key question is “when should I use qualitative research, and when should I use quantitative research for the best results?” Another way of looking at this is, “when should I be listening to users, and when should I just be shipping code and looking at the metrics?”

The real answer is that you should do both constantly, but there are times when one is significantly more helpful than the other.

I will continue to repeat my cardinal rule:

  • Quantitative research tells you WHAT your problem is.
  • Qualitative research tells you WHY you have that problem.

Now, let’s look at what that actually means to you when you’re making product decisions.

A One-Variable Change

When you’re trying to decide between qualitative and quantitative testing for any given change or feature, you need to figure out how many variables you’re changing.

Here’s a simple example: You have a product page with a buy button on it. You want to see if the buy button performs better if it’s higher on the page without really changing anything else. Which do you do? Qualitative or quantitative?

That’s right, I said this one was simple. There’s absolutely no reason to qualitatively test this before shipping it. Just get this in front of users and measure their actual rate of clicking on the button.

The fact is, with a change this small, users in a testing session or discussion aren’t going to be able to give you any decent information. Hell, they probably won’t even notice the difference. Qualitative feedback here is not going to be worth the time and money it takes to set up interviews, talk to users, and analyze the data.

More importantly, since you are only changing one variable, if user behavior changes, you already have a really good idea WHY it changed. It changed because the call to action (CTA) button was in a better place. There’s nothing mysterious going on here.
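To make the “just measure it” step concrete, here is a minimal sketch of comparing buy-button click-through rates before and after the change with a simple two-proportion z-test. This is not from the original post; the visitor counts and variable names are hypothetical, and you’d normally let your analytics or A/B testing tool do this math for you.

```python
from math import sqrt

# Hypothetical numbers, for illustration only: visitors and buy-button clicks per variant
control_visitors, control_clicks = 5000, 400   # button in its current position
variant_visitors, variant_clicks = 5000, 470   # button moved higher on the page

p_control = control_clicks / control_visitors
p_variant = variant_clicks / variant_visitors

# Pooled proportion and standard error for a two-proportion z-test
p_pool = (control_clicks + variant_clicks) / (control_visitors + variant_visitors)
se = sqrt(p_pool * (1 - p_pool) * (1 / control_visitors + 1 / variant_visitors))
z = (p_variant - p_control) / se

print(f"Control click rate: {p_control:.1%}")
print(f"Variant click rate: {p_variant:.1%}")
print(f"z-score: {z:.2f} (|z| > 1.96 roughly corresponds to 95% confidence)")
```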

There’s an exception! In a few cases, you are going to ship a change that seems incredibly simple, and you are going to see an enormous and surprising change in your metrics (either positive or negative). If this happens, it’s worth running some observational tests with something like UserTesting.com where you just watch people using the feature both before and after the change to see if anything weird is happening. For example, you may have introduced a bug, or you may have made it so that the button is no longer visible to certain users.

A Multi-Variable or Flow Change

Another typical design change involves adding an entirely new feature, which may affect many different variables.

Here’s an example: You want to add a feature that allows people to connect with other users of your product. You’ll need to add several new pieces to your interface in order to allow users to do things like find people they know, find other interesting people they don’t know, manage their new connections, and get some value from the connections they’ve made.

Now, you could simply build the feature, ship it, and test to see how it did, much the way you made your single-variable change. The problem is that you’ll have no idea WHY it succeeded or failed – especially failed.

Let’s say you ship it and find that it hurts retention. You might conclude that it was a bad feature choice, but I often find that people skip new features not because they hate the concept, but because the features are badly implemented.

The best way to deal with this is to prevent it from happening in the first place. When you’re making large, multi-variable changes or really rearranging a process flow for something that already exists on your site, you’ll want to perform qualitative testing before you ever ship the product.

Specifically, the goal here is to do some standard usability testing with interactive prototypes, so that you can learn which bits are confusing (and yes, there are confusing bits, trust me!) and fix them before they ever get in front of real users.

Sure, you’ll still do an A/B test once you’ve shipped it, but give that new feature the best possible chance to succeed by first making sure you’re not building something impossible to use.

Deciding What To Build Next

Look, whatever you take from this next part, please do not assume that I’m telling you that you should ask your users exactly what they want and then build that. Nobody thinks that’s the right way to build products, and I’m tired of arguing about it with people who don’t get UCD or Lean UX.

However, you can learn a huge amount from both quantitative and qualitative research when you’re deciding what to build next.

Here’s an example: You have a flourishing social commerce product with lots of users doing lots of things, but you also have 15 million ideas for what you should build next. You need to narrow that down a bit.

The key here is that you want to look at what your users are currently doing with your product and what they aren’t doing with it, and you should do that with both qualitative and quantitative data.

Qualitative Approaches:

  • Watch users with your product on a regular basis. See where they struggle, where they seem disappointed, or where they complain that they can’t do what they want. Those will all give you ideas for iterating on current features or adding new ones.
  • Talk to people who have stopped using your product. Find out what they thought they’d be getting when they started using it and why they stopped.
  • Watch new users with your product and ask them what they expected from their first 15 minutes using it. If this doesn’t match what your product actually delivers, either fix the product or fix the first-time user experience so that you’re fulfilling users’ expectations.

Quantitative Approaches:

  • Look at the features that are currently getting the most use by the highest value customers. Try to figure out if there’s a pattern there and then test other features that fit that pattern.
  • Try a “fake” test by adding a button or navigation element that represents the feature you’re thinking of adding, and then measure how many people actually click on it. Instead of implementing an entire system for making friends on your site, just add a button that allows people to Add a Friend, and then let them know that the feature isn’t quite ready yet while you tally up the percentage of people who are pressing the button.
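As a rough illustration of tallying up that “fake” button test, here is a minimal sketch. It is not from the original post; the event log, field names, and numbers are all hypothetical stand-ins for whatever your analytics pipeline records.

```python
# Hypothetical event log: one record per visitor who saw the placeholder "Add a Friend" button
visits = [
    {"user_id": 101, "clicked_add_friend": True},
    {"user_id": 102, "clicked_add_friend": False},
    {"user_id": 103, "clicked_add_friend": False},
    {"user_id": 104, "clicked_add_friend": True},
    {"user_id": 105, "clicked_add_friend": False},
]

# Count clicks and report what fraction of visitors pressed the fake button
clicks = sum(1 for visit in visits if visit["clicked_add_friend"])
rate = clicks / len(visits)
print(f"{clicks} of {len(visits)} visitors ({rate:.0%}) clicked the fake 'Add a Friend' button")
```

If that percentage is high enough to justify the engineering cost, build the real feature; if not, you’ve saved yourself the effort of implementing an entire friending system nobody wanted.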

Still Don’t Know Which Approach to Take?

What if your change falls through the cracks here? For example, maybe you’re not making a single-variable change, but it’s not a huge change either. Or maybe you’re making a pretty straightforward visual design or messaging change that will touch a lot of places in the product but that doesn’t actually affect the user process too much.

As many rules as we try to make, there will still be judgment calls. The best strategy is to make sure that you’re always keeping track of your metrics and observing people using your product. That way, even if you don’t do exactly the right kind of research at exactly the right time, you’ll be much more likely to catch any problems before they hurt your business.

This post originally appeared at Users Know.

About the guest blogger: Laura Klein is a Principal at Users Know, where she helps lean startups and other small companies get to know their users and design better products. She works directly with startups as a member of the team, not only designing a great product but also teaching them how to involve their users in the design process. Follow her on Twitter at @lauraklein.
