Test Big Strategies via Small Experiments

Posted by Kasey Jones on Mon, Dec 12, 2016


“The best way to test a big idea or strategy is through small, fast, additive experiments.” – Ash Maurya, Scaling Lean

Lean startup methodology is predicated on making a minimal investment (and taking minimal risk) to learn the most valuable lessons. When it comes to creating highly effective strategies, the method is the same: create small, fast, low-risk, highly informative tests.

How do you create “small” tests?

Small tests begin by getting to the heart of your strategy – what is the biggest, perhaps riskiest, assumption your strategy makes?

Ash Maurya in Scaling Lean: Mastering The Key Metrics for Startup Growth uses a content strategy as an example.

For instance, if you wanted to test a new content marketing strategy, what would you do? Here’s a possible task list:
1. Pick a name for your blog.
2. Register a domain.
3. Design a logo.
4. Set up a WordPress site.
5. Publish your first blog post.
6. Promote the blog post.

The first four items on this list require the acquisition of additional resources. While relatively inexpensive in money terms, they cost time, which is more valuable than money. More important, they do little to test the riskiest assumption in this strategy: “Can you write compelling content that engages your audience?” That assumption is tested only in steps 5 and 6.

Do you even need your own blog to do this? You can instead leverage other people’s networks by guest blogging first. Not only does this get you to step 5 faster, it also takes care of step 6.

This experiment is faster and easier than working through all six steps, and most importantly, if you identify and track the right lean analytics metrics, it tests the right leading indicators early – low-level signals that tell you whether your core strategy is likely to work.

Ash Maurya presents a simple formula for validating a strategy, which I’ll paraphrase: “How can I test [core idea of strategy] without doing the things that require the most investment of time/money?”

  • Identify your core idea
  • What do you want your outcome to be? (Make it measurable)
  • What action do you believe will lead to that outcome?
  • How long should that outcome take?

As Maurya puts it: “[Specific testable action] will drive [Expected measurable outcome]” – within a given, realistic time period.

Putting a ‘due-by’ date on your test is an important step many are tempted to skip, but it keeps your experiment from turning into a time and money pit.
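The template above – a specific action, a measurable outcome, and a hard deadline – can be sketched as a small data structure. This is only an illustration: the class, field names, and the guest-post example numbers are hypothetical, not from the book.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Hypothesis:
    action: str            # specific testable action
    expected_outcome: str  # expected measurable outcome
    target_metric: float   # the number that counts as success
    due_by: date           # the 'due-by' date that time-boxes the test

    def statement(self) -> str:
        # "[Specific testable action] will drive [Expected measurable outcome]"
        return (f"{self.action} will drive {self.expected_outcome} "
                f"(target: {self.target_metric}) by {self.due_by.isoformat()}")

    def evaluate(self, observed: float, today: date) -> str:
        if today > self.due_by:
            return "expired"  # the time box is up: decide, don't drift
        return "validated" if observed >= self.target_metric else "pending"

# Hypothetical example in the spirit of the guest-blogging test above.
guest_post_test = Hypothesis(
    action="Publishing 3 guest posts",
    expected_outcome="200 referral visits to our signup page",
    target_metric=200,
    due_by=date(2017, 1, 31),
)
print(guest_post_test.statement())
print(guest_post_test.evaluate(observed=240, today=date(2017, 1, 20)))
```

Forcing the outcome into a number and the deadline into a date makes the experiment falsifiable: on the due-by date it is either validated or it isn’t, and “expired” is an answer too.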

Then, get the team together for a brainstorming session on how to test your core idea with the least possible effort and what leading indicators you can use for early signs of success (or failure).

“Formalized testing can provide a level of understanding about what really works that puts more intuitive approaches to shame.” – Thomas H. Davenport, Harvard Business Review

It’s validated learning, essentially – putting in the minimum amount of effort to validate a basic idea before investing heavily in it. But whereas validated learning is typically used for product development, the same process can be used to inform any strategy.

Little Tests for Big Strategies: A SaaS Example from Ash Maurya

Even after reading Ash Maurya’s book, figuring out how to develop an experiment that tests the right thing with the least amount of effort can remain a daunting prospect. Content strategy is one thing, but how would this work with a SaaS product? The author was kind enough to break it down further via email:

“Let’s consider another example. Say you have a web application and are considering launching a native mobile version of your service. The traditional approach would be to build out the mobile version, launch it, and then measure how your users engage with the app. This process could easily take weeks and, more realistically, months.
What if you don’t get enough people using the app after you launch, or you discover that the use cases for mobile are completely different from web? You’d have to go back to the drawing board and start over.
The key insight with launching anything new is realizing that the first battle isn’t getting to usage, but validating sufficient motivation or interest. In this example, you don’t need to build out an app to measure interest. You could start by sending out an email campaign to your existing users announcing your intent to build a mobile app and direct them to a landing page.
On that page, your users would see a preview of the app (a set of screenshots) and would be able to express their interest by voting thumbs up or down. This experiment could be built and launched in a few days and within a couple weeks give you a strong sense on whether to move forward or not. Further experiments could be built to follow-up with those users who expressed interest and continually show them more aspects of the app until launch.
By building a continuous feedback loop with your users, you ensure that you don’t go astray and that you stay focused on only building what works and ignoring the rest.”
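The decision rule behind that landing-page experiment can be made explicit: count the thumbs-up and thumbs-down votes and compare the share of positive votes against a threshold. A minimal sketch, in which the 60% threshold and the 100-vote minimum are illustrative assumptions, not numbers from Ash Maurya:

```python
# Hypothetical analysis of the landing-page vote experiment described above.
# The threshold and minimum sample size are assumptions for illustration.
def should_build(thumbs_up: int, thumbs_down: int,
                 min_votes: int = 100, threshold: float = 0.60) -> str:
    total = thumbs_up + thumbs_down
    if total < min_votes:
        return "keep collecting"  # not enough signal to decide either way
    interest = thumbs_up / total  # leading indicator: share of positive votes
    return "build it" if interest >= threshold else "rethink the strategy"

print(should_build(thumbs_up=180, thumbs_down=70))  # 72% of 250 votes
```

Deciding the threshold and minimum sample size before launching the experiment matters as much as the experiment itself; otherwise it is tempting to reinterpret a weak signal as a strong one after the fact.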

Once you validate motivation or interest through your small test, your job isn’t over just yet. There’s more work to do.

Running the Test Isn’t the End of the Test

Creating just the right test is likely the most time-consuming part of the process, but it’s not the end of it. After you run your test, you have to analyze the results. Did it work? Great! Your hypothesis has been validated. It might be time to implement your big strategy in all its glory, or you may want to discuss ways to optimize it for even better results and run another test.

Did it fail? Failed tests are just as useful as successful ones. Your first job now is to ask “Why?” (maybe even five times) to get to the root cause of why your assumption didn’t hold.

One More Big Benefit to Testing Small: Lean Analytics

By testing small and testing often, you’ll not only avoid investing heavily in strategies that don’t work, you’ll also gather far more data – data you can use to develop new strategies, even new products. But that data is only useful if you can easily find and decipher it. So, while you’re testing small, be sure to save your results where your team can easily access them. This is where Lean Analytics comes into play.

Going Lean is, at its core, about getting the most value. And, you can only get the most value out of your data when you can track it easily, share it and collaborate on it. Make sure you've got the right tools in place to make collecting, visualizing and collaborating on your data as easy as possible.

Read more about testing big strategies through small experiments in Ash Maurya’s book, Scaling Lean: Mastering The Key Metrics for Startup Growth.

This article is a guest post from Nichole Elizabeth DeMeré, SaaS Consultant & Customer Success Evangelist.
