Wednesday, May 24, 2017

Pivot or Persevere? Find Out Using Lean Experiments

The Lean Startup approach is gaining popularity in organisations of all sizes, which means teams must adapt their processes. More and more, UX professionals are being asked to take on Lean experiments, which are fantastic but differ slightly from traditional UX research.

To recap, “Lean Startup” is a business approach that calls for rapid experimentation to reduce the risks of building something new. The framework has roots in the Lean Manufacturing methodology and mirrors the scientific method. It calls for very UX-friendly processes, such as collecting iterative feedback and focusing on empirical measurement of performance indicators.

One of the core principles is to iterate through a cycle known as Build-Measure-Learn: build a minimum viable product (MVP) to test, measure what happens, and then decide whether to move forward with the proposed solution (persevere) or find another (pivot).

Simple in theory. But it can be challenging to figure out what MVP to build, how to interpret the data collected, and what the next steps should be after completing a lean experiment. These guidelines will help you get the most out of your experimentation cycles and understand whether you should pivot or persevere.

Consider the context

The most important part of data analysis starts before you’ve gathered any data. To help you decide what type of research to do, you first need to consider where you are in the progress of your product, what information you already have, and what the biggest, riskiest open questions are.

In the conceptual stages of a totally new business or feature idea, you first need to understand enough about your potential user base and their needs to make informed hypotheses about the problems they have and how you might be able to address them. Any idea for a new thing is an assumption, and doing some generative research will help you shape and prioritise your assumptions.

The Lean Startup approach advocates starting with a process called GOOB (Getting Out Of the Building), which looks a whole lot like a condensed version of traditional ethnography and interviews. The goal is to talk to a small number of people who you think fit your target audience and understand their current needs, experience gaps, pain points, and methods for solving existing problems related to your idea.

Run these interviews just like any other UX interview and use the data to create a list of assumptions about your target users, potential problems to solve, and ways you could address those problems. Start with a period of exploration and learning before you build anything.

Prioritising what to explore

Your list of assumptions can serve as your backlog of work. Rather than creating a list of necessary features to build, treat each item in the list as a separate hypothesis to explore and either prove or disprove. Then prioritise the hypotheses that are the riskiest, or that would have the biggest impact if your assumption is wrong. Assumptions about what the problem is and who has it should take priority over assumptions about how to solve any problems or build any features.

Typical assumptions might look something like this:

I believe [___] set of people are facing [___] challenge.

I believe [___] solution could help address [___] problem better than my users’ current workaround.

I believe [___] solution could generate money in [___] way.

For instance, let’s say that you’re trying to create a new application to help busy parents plan meals. You’ve interviewed a dozen busy parents and have some insight that says the two biggest issues they face are deciding what to cook and finding time to buy all the ingredients/groceries. You might have a hunch about which direction to go, but your first test should be centred on figuring out which of these issues is more compelling to your users.
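If it helps to see the prioritisation in concrete terms, here’s a minimal sketch in Python of what an assumption backlog might look like, sorted so that problem assumptions outrank solution assumptions and riskier, higher-impact items float to the top. The entries and the 1–5 scores are made up for illustration:

```python
# A hypothetical assumption backlog, scored and sorted so the riskiest,
# highest-impact assumptions get tested first. Entries and 1-5 scores
# are illustrative only.
assumptions = [
    {"belief": "Busy parents struggle to decide what to cook",
     "type": "problem", "risk": 5, "impact": 5},
    {"belief": "A weekly recipe generator would beat current workarounds",
     "type": "solution", "risk": 4, "impact": 4},
    {"belief": "Busy parents struggle to find time to buy groceries",
     "type": "problem", "risk": 5, "impact": 4},
]

# Problem assumptions outrank solution assumptions; within each group,
# higher risk x impact comes first.
backlog = sorted(
    assumptions,
    key=lambda a: (a["type"] != "problem", -(a["risk"] * a["impact"])),
)
for item in backlog:
    print(f'[{item["type"]:8}] {item["belief"]}')
```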

Setting hypotheses

The next step is to craft a precise hypothesis that will make it very easy to tell whether you’ve proved or disproved your assumption.

I like to use the following framework for creating hypotheses:

If we [do / build / provide] _____,

Then [these people],

Will [desirable outcome].

We’ll know this is true when we see [actionable metric].

The do, build, provide section refers to the solution. This could be as high-level as deciding which type of app to build, or as specific as the type of interaction to develop for a particular interface.

These people should represent your assumed customer archetypes, derived from your initial interviews and other data.

The desirable outcome should be something that correlates to business success, such as sending a message or ordering an item. Keep in mind that it’s easy to come up with outcomes that look good, but don’t really tell you anything. These are called vanity metrics. For instance, if I want people to make a purchase on an ecommerce site, it’s not really that helpful to know how many people decided to follow us on Facebook. Instead, focus on identifying the pieces of information that help you make a decision and that give you a true indication of business success.

The actionable metric is whatever will tell you that your investment into building this item will be worth it. Actionable metrics can be a little tricky, especially early on, but I like to try to set these metrics as the barometers of the minimum amount of success you need to prove that the investment will be worthwhile. You can look at both perceived cost of investment and perceived value to gauge this.

Let’s say you work at an ecommerce company and you’re proposing a new feature that you hope will increase last-minute additions to a cart. You could ask the development team to estimate how much effort it would take to build out the feature, then work backward from that cost to see how much the average order size would have to increase to offset the costs.

If the team estimates the work would take about 5 weeks and cost $25,000, you’ll need the change to make at least that much money in that amount of time. So then let’s say you also know that the company usually has 1,000 sales a day and the average order size is $20. That means that right now, the company makes $20,000 a day. To offset the $25,000 in estimated development costs over 5 weeks, the change you make would have to bring in an extra $5,000 per week. Spread across the roughly 7,000 orders placed each week, that works out to about 71 cents more per order, taking the average order size from $20 to roughly $20.71. All the additional money earned after the offset is additional profit for the company.
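If you find yourself doing this break-even math often, it’s easy to script. Here’s a minimal sketch in Python using the figures from the example above (the helper function and its name are my own):

```python
# A minimal sketch of the break-even arithmetic above. The figures come
# from the example; the helper function is my own.
def required_order_increase(dev_cost, dev_weeks, orders_per_day, days_per_week=7):
    """Extra dollars each order must earn to pay back the development cost."""
    weekly_target = dev_cost / dev_weeks          # extra revenue needed per week
    weekly_orders = orders_per_day * days_per_week
    return weekly_target / weekly_orders

increase = required_order_increase(dev_cost=25_000, dev_weeks=5, orders_per_day=1_000)
print(f"Average order size must rise by about ${increase:.2f}")  # ~ $0.71
```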

That was a lot of math, and you don’t always have that much information at your fingertips, especially when you’re very early in the product development process. You might have to make an educated guess about what sort of number would be “good enough.” The point is to pick a metric that will truly help inform you about whether or not you should invest in the new change.

Sometimes it’s easier to conceptualise this as a fail condition: the point at which it wouldn’t be worth moving forward. In other words, you can frame it as, “If we don’t make at least x% more on each order afterwards, we won’t implement the full version of the feature.” Then you can work backwards to craft a testable hypothesis.

Of course, this framework can be adjusted as needed, but you need to clearly define the exact question you’re exploring and what success looks like. If you can’t come up with a clear hypothesis statement, go back and re-evaluate your assumption and narrow it down so you can run a successful experiment.

Design your experiment

Once you have a single clear question to answer and a hypothesis, deciding what sort of experiment to run should be fairly straightforward.

Let’s revisit the meal planning application example. Say that you’ve decided your riskiest assumption is which of the two core problems is more compelling to users.

A hypothesis might look something like this:

If we build an app that automatically generates 5 recipe ideas per week,

Then busy parents,

Will be interested in downloading this application.

We’ll know this is true when we present them with a variety of food-related apps and they choose the recipe generation app at least, say, 15 percentage points more often than any other choice.
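If your team runs a lot of experiments, it can help to keep every hypothesis in the same four-part shape. Here’s a hypothetical sketch in Python; the record type and its field names are my own, not part of any standard framework:

```python
# A hypothetical sketch: one structured record per hypothesis, mirroring
# the four parts of the framework above. Field names are my own.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    solution: str   # what we do / build / provide
    audience: str   # the assumed customer archetype
    outcome: str    # the desirable, business-relevant outcome
    metric: str     # the actionable metric that would prove it

meal_planner = Hypothesis(
    solution="an app that automatically generates 5 recipe ideas per week",
    audience="busy parents",
    outcome="interested in downloading the application",
    metric="chosen at least 15 percentage points more often than any other app",
)
print(meal_planner)
```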

Now you can focus on designing a way to test which app a user would be most interested in using. There is no one exact way to do this. You could create fake landing pages for each potential solution and see how many people sign up for each fake product, or create ads for the different apps and see which one generates the most actions. You should focus on finding the smallest thing your team can build in order to test your hypothesis: the minimum viable product.

In this case, a good MVP might be a mockup of a page with blurbs for a few different fake applications you haven’t built yet. Then you could use a click-testing tool like UsabilityHub to ask participants to choose a single app to help them with meal planning, and monitor how many clicks each concept gets. This way, you don’t even need to launch live landing pages or ad campaigns; you just need the page mock-up.

Frequently used lean experiment types/MVPs include:

  • Landing page tests
  • Smoke tests such as explainer video, shadow feature, or coming soon pages
  • Concierge tests
  • Wizard of Oz tests
  • Ad tests
  • Click tests

These are just a few suggestions, and there are many more experiments you can run depending on your context and what you’re trying to learn. Use these suggestions as starting places, not step-by-step directions, for figuring out the right experiment for your team.

Analysing your results

If you’ve set a clear and concise hypothesis and run a well-designed experiment, it should be easy to see whether you’ve proved or disproved your hypothesis.

Looking at the meal planning app example again, let’s say you ran the click test with 1,000 participants. You included 4 app concepts in the test, and hypothesised that Concept A would be the most compelling.

If Concept A receives 702 clicks, Concept B receives 98 clicks, Concept C receives 119 clicks, and Concept D receives 81 clicks, it’s very obvious that you proved your hypothesis. You can persevere, or move forward with Concept A, and then focus on testing your next set of assumptions exploring that concept. Maybe now is the time to tackle an assumption about the app’s core feature set.

On the other hand, if Concept A receives 45 clicks, Concept B receives 262 clicks, Concept C receives 112 clicks, and Concept D receives 581 clicks, you obviously disproved your hypothesis. Concept A is clearly not the most compelling concept and you should pivot away from that idea.

In this case, you also have a clear indication of the direction of your pivot: Concept D is a clear winner. You could set a new assumption that Concept D is a compelling direction and run another experiment to verify it, perhaps by running a similar test to compare it against just one other concept, or by setting up a landing page test. Or you could do more customer interviews to find out why people found that concept so compelling.

While sometimes there’s an obvious winner, it’s not always clear which way the scales tip. 

But what if Concept A receives 351 clicks, Concept B receives 298 clicks, Concept C receives 227 clicks, and Concept D receives 124 clicks? There’s no clear winner or direction. Did you set up a bad test? Are none of your concepts compelling? Or all of them? What next?

The short answer is that you don’t know. But the great thing about lean experiments is that the system is designed such that your next step should be running more experiments. In failing to find a winning direction, you succeeded in learning that your original assumption was incorrect, and you didn’t need to invest much to figure that out. You now know that you need to pivot; you just may not be sure in which direction.
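One way to take some of the guesswork out of that call is to script the comparison against the threshold you set in your hypothesis. Here’s a minimal sketch in Python using the inconclusive counts above; the 15-point threshold comes from the earlier hypothesis, while the margin-of-error calculation is a standard approximation I’ve added:

```python
# A minimal sketch: check a click test against the pre-set hypothesis
# ("Concept A is chosen at least 15 percentage points more often than
# any other concept"), plus a rough 95% margin of error on the lead.
from math import sqrt

clicks = {"A": 351, "B": 298, "C": 227, "D": 124}  # the inconclusive example
hypothesised = "A"
threshold = 15.0  # percentage points, set before running the experiment

n = sum(clicks.values())
shares = {k: 100 * v / n for k, v in clicks.items()}
ranked = sorted(shares, key=shares.get, reverse=True)
leader, runner_up = ranked[0], ranked[1]
lead = shares[leader] - shares[runner_up]

# Standard error of the difference between two multinomial proportions,
# including the covariance term (each participant picks exactly one option).
pa, pb = shares[leader], shares[runner_up]
se = sqrt((pa * (100 - pa) + pb * (100 - pb) + 2 * pa * pb) / n)

print(f"{leader} leads {runner_up} by {lead:.1f} points (+/- {1.96 * se:.1f})")
if leader == hypothesised and lead - 1.96 * se >= threshold:
    print("Persevere")
else:
    print("Pivot, or run a follow-up test")
```

For the inconclusive counts, Concept A’s 5.3-point lead sits within the roughly 5-point margin of error and well below the 15-point threshold, so the script says to pivot or re-test. Plugging in the clear-cut counts from earlier (702, 98, 119 and 81) gives a lead of more than 50 points even after subtracting the margin of error, so the same script would print “Persevere”.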

Which way to pivot?

If you know that you need to pivot but are unsure what direction to take, my first suggestion is to run another related experiment to verify your initial findings.

In the food example, you could try a similar test with just 3 options and see if the outcomes change, or try running landing pages for all 4 concepts. While you don’t want to be falsely optimistic, you also want to be sure that there wasn’t something about the way you ran your test, or a fluke in the data, giving you a false impression. Since lean experiments are intentionally quick rather than robust, they can sometimes lack the rigour to give you true confidence. If you have a true finding, you should be able to replicate the results with another test.

If you run another test and get similarly inconclusive data, or truly have no idea what direction to go next after running an experiment, try stepping away from lean experimentation and going back to exploratory research methods.

A successful pivot can be any kind of change in business or product model, such as a complete reposition to a new product or service, a single feature becoming the focus of a product, a new target user group, a change in platform or channel, or a new kind of revenue or marketing model. A structured experiment is not going to teach you which direction to go in, so you need to do some broader, qualitative data gathering.

I recommend running interviews with two subsets of people. First, talk to people who love your product/service and most often take the action you want, such as purchasing frequently, and find out what they love about you and why. Then, if possible, talk to the people who are not taking the desired actions, to try to find out why, or what they’re looking for instead. These interviews will be just like any other discovery interview, and you’ll be looking for the participants to guide you to new insights that can lead to your next set of assumptions to test.

Conclusion

Lean experiments are a great way to get any organisation learning from its customers and poised to make valuable changes. Getting used to the ins and outs of setting clear hypotheses and learning whether to pivot or persevere can take some time, but luckily those of us in UX already have the skill sets to do it successfully. Go forth and experiment!

Got a question on running lean experiments? Ask Amanda when she joins us for our next Ask the UXperts session in our Slack Channel, 3pm Thursday 25 May PDT or 8am Friday 26 May AEST: http://ift.tt/2qyEPtZ



by Amanda Stockwell via UX Mastery
