The Four-Step Design Process for Building Products Customers Will Love

Sarah Harrison, a user experience design veteran at custom-fit online lingerie store True & Co., has developed a methodology for continuously optimizing designs that mixes subjective decision-making with empirical testing.

Harrison breaks what she’s learned about the design process into a four-step loop:

  • Always observe users using the product.
  • Know your metrics. Obsess over them.
  • Befriend your support team.
  • Run A/B tests.

And then repeat, improving the product continuously as the company grows and the tastes of customers change.

Step 1: Watching Customers Fiddle With Your Product

Everyone says they know how important customer development is. How many times have you heard, “Get out of the office and talk to customers”? But many designers forget this basic philosophy or find it too difficult.

Observing users is like flossing. People know they’re supposed to do it every day, but they don’t. Just do it — it’s not a big deal.

If you don’t have the resources to conduct formal paid user tests recorded with expensive equipment, don’t just give up on it entirely — you can go to a cafe or bar, open up your laptop to your live site or a test site, and ask someone to use it. If you have to cajole the person, offer to buy them a coffee or a beer.

Alternatively, Harrison suggests remote user testing; she recommends userlytics.com or usertesting.com if you need third-party tools to facilitate it.

Once you get a person in front of your service, watch them use the product. You can give them goals to attempt (like "add a bra you like to your shopping cart" in the case of True & Co.), but don't tell them how to accomplish those goals; walking them through the steps defeats the purpose of the test. Once they're finished, you can give them a short survey to learn their thoughts and feelings about the experience.

If you’re recording a test, edit it down. Your video’s most important audience is your fellow co-workers. Harrison says, “You’ll find these little ‘A-ha!’ moments and you can share them with relevant team members. Nobody’s going to sit through a 15-minute usability testing video. You are.” Cut your usability testing videos down to the salient points so that others will actually watch them.

A presentation can work better than a set of jumbled notes. Show, don’t tell. Tailor the presentation to the kind of co-worker you’re trying to help. It’s a lot faster to show a developer a 30-second clip of a test user struggling with a bug than to ask them to spend a half-hour of their expensive time watching the full recording just to get to the one relevant part. You can also collect clips like this for the future, organized into folders, so that when you have time to work on new features you can draw on that accumulated information.

Step 2: Knowing Your Metrics

It’s become common at startups for employees to obsess over metrics to figure out how the product is doing. As a designer, Harrison focuses on a particular set of metrics. She says, “We’re looking for metrics that help you get inside users’ heads. We’re trying to figure out what the user is doing and why the user behavior is driving the metrics in that way.” She tries to determine the user’s most likely motivation, and to develop ways for the system to better satisfy those intentions.

Usability problems on your site can be friction. So can clutter, or users’ anxiety about handing over the information you’re asking for.

Studying conversion funnels set up through systems like Google Analytics or Mixpanel is a large part of this work. ‘Conversions’ aren’t necessarily just products sold; a conversion can be any behavior you’ve defined as desirable, like getting a user to provide an e-mail address or add a product to a shopping cart. When looking for problems to solve, Harrison searches for sources of ‘friction’ where conversions suddenly drop off.

In particular, she examines conversion proportions. If 80% of new users make it through each of three successive steps in a funnel, but there’s a huge drop-off at some point (for example, when you ask them to provide an e-mail address or a credit card number), it gives you a jumping-off point for developing theories on how to improve conversions, and new tests to check those theories.

Harrison says that looking at conversion funnels broken out by device is particularly useful for uncovering issues in the service. For example, when there’s an obvious breakdown in conversions for iPad users that doesn’t occur for Google Chrome desktop users, or an issue specific to a particular version of Android, it’s probably a bug that needs to be handed to the appropriate team. Checking aggregate conversions won’t uncover those kinds of critical issues, which can alienate entire categories of customers if they’re not fixed.
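
To make that concrete, here’s a minimal sketch (in Python) of the kind of per-segment funnel breakdown an analytics tool produces. The step names, segment labels, and counts are invented for illustration; they aren’t True & Co.’s data.

    # Hypothetical funnel counts by device segment; in practice these
    # numbers come from a tool like Google Analytics or Mixpanel.
    FUNNEL = ["visit", "browse", "add_to_cart", "enter_email", "purchase"]
    COUNTS = {
        "desktop_chrome": [10000, 8100, 6400, 5200, 4100],
        "ipad_safari":    [4000, 3200, 2500, 600, 450],
    }

    for segment, counts in COUNTS.items():
        print(segment)
        steps = zip(FUNNEL, FUNNEL[1:])      # consecutive step names
        pairs = zip(counts, counts[1:])      # consecutive step counts
        for (step_a, step_b), (n_a, n_b) in zip(steps, pairs):
            rate = n_b / n_a                 # step-to-step conversion
            flag = "  <-- investigate" if rate < 0.5 else ""
            print(f"  {step_a} -> {step_b}: {rate:.0%}{flag}")

Run side by side, the segments expose exactly the situation Harrison describes: in this made-up data, iPad users collapse at the e-mail step while desktop Chrome users sail through, pointing to a device-specific bug that an aggregate funnel would average away.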

Fluctuations in usage are normal at startups, and determining the root causes is an ongoing responsibility of product owners. Harrison describes the experience: “You might see a big spike when you’re featured on Oprah and you might see a big drop when your site was really slow or broken. This happens all the time.” By studying patterns in usage over time, you can learn more about what your users want and how well you’re satisfying them. Users may tend to behave in certain ways on certain days or times of the week in different time zones. It may be that they prefer to buy certain kinds of products on Tuesday, and others on Friday.
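
As a toy illustration of that kind of pattern-hunting, you can bucket purchases by weekday and look for weekly rhythms. The orders and product categories below are made up:

    from collections import Counter
    from datetime import datetime

    # Hypothetical (timestamp, product_category) order records.
    ORDERS = [
        ("2024-03-05T09:14", "bralette"), ("2024-03-12T10:30", "bralette"),
        ("2024-03-19T08:50", "bralette"), ("2024-03-08T19:45", "push-up"),
        ("2024-03-15T20:10", "push-up"), ("2024-03-15T21:05", "push-up"),
    ]

    # Count purchases per (weekday, category) pair.
    by_day = Counter(
        (datetime.fromisoformat(ts).strftime("%A"), cat) for ts, cat in ORDERS
    )
    for (day, cat), n in by_day.most_common():
        print(f"{day:<9} {cat}: {n}")

In this invented data, one category clusters on Tuesdays and another on Fridays; real data is noisier, but this kind of grouping is where such patterns first show up.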

Step 3: Befriend Your Customer Support Team

A high-and-mighty designer making friends with support staff? Even if none of them are wearing thick-framed glasses or black turtlenecks? 

Yes. It’s a good idea. You should do it.

Why?

Because support staff spend most of their days speaking with customers and know all the recurring issues those customers hit when using your product.

As soon as customers start having problems in large numbers, the support staff are the first people to know about it. They’ll be a lot more motivated to keep channels of communication going with the product team if they know that customer complaints are being addressed. The mentality of support staff is a bit different from that of a designer.

Harrison notes that support solves problems in the context of a call with a customer: they fix that customer’s individual problem and move on to the next one. Designers, in contrast, are problem-preventers. But a close relationship and constant communication between support and design will not only lead to a better product, it will also reduce customer service costs, since you’ll catch issues before they become widespread problems.

Use support staff as a sounding board: ask how they expect upcoming features to impact customers. When you do implement a new feature, ask whether it solved the common customer issue it was supposed to, or just spawned a set of new fires to fight. Harrison says to give support staff a “heads up” whenever new features ship; they need to know what changed so they can deal with the issues that come up, and it’s not fair to ambush them. Lastly, as a designer, both listen in on and take some support calls yourself. Empathy is one of the most critical qualities of great design, and there’s no better way to feel what the consumer feels than getting on the phone to hear what’s not working.

Step 4: Always Be A/B Testing

The previous steps in Harrison’s process exist to inform the final one: testing solutions to the issues you’ve uncovered through usability testing, studying your service metrics, and checking both against the problems real customers actually report.

Harrison, like many designers, uses services like Optimizely to quickly test small changes in page designs, then calculates the statistical significance of the results to decide whether to keep a change; a difference that isn’t statistically significant may be nothing but noise. Whether you’re changing a button size, altering copy, swapping an image, adjusting a layout, changing background colors, or removing a border, testing the alteration before committing to it controls risk and improves performance over time.

In addition, Harrison notes that deciding what level of statistical significance is good enough for your organization is an important judgment call. True & Co. only acts on results with 90% or better significance; another company might set a stricter or looser bar. Whatever level you choose, take it seriously and apply it consistently.
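
As an illustration of what that calculation involves, here is a generic two-sided two-proportion z-test in Python, using only the standard library. This is a textbook formula, not True & Co.’s or Optimizely’s actual tooling, and the counts are invented:

    from math import sqrt
    from statistics import NormalDist

    def ab_significant(conv_a, n_a, conv_b, n_b, confidence=0.90):
        """Two-sided two-proportion z-test: is the difference between
        variants A and B significant at the given confidence level?"""
        p_a, p_b = conv_a / n_a, conv_b / n_b
        pooled = (conv_a + conv_b) / (n_a + n_b)   # pooled conversion rate
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        return p_value, p_value < (1 - confidence)

    # Hypothetical test: variant B converts 420/5000 vs. A's 368/5000.
    p, significant = ab_significant(368, 5000, 420, 5000)
    print(f"p-value {p:.3f}; significant at 90%? {significant}")

This hypothetical result (p ≈ 0.054) clears a 90% bar but not a 95% one, which is exactly why the threshold you pick matters.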

With A/B testing, even discovering that the two sides of a test perform almost identically is a valuable result; it tells you the change is unlikely to be worth implementing. Testing is an orientation for the entire company, not a one-off occurrence.

Once you’ve gone through this four-step process, don’t think you’re done. Harrison says, “You have to do this over and over and over and over.” The loop encourages the company to continually re-orient around how users are actually interacting with the product, and it exposes whether changes are improvements or just change for the sake of change.
