This article is part of “The Data-Driven Product Manager,” a series of interviews with product experts to help you use data to improve your product.

Laura Klein is the author of UX for Lean Startups and blogs at Users Know, a resource for product teams who want to effectively “close the loop” with their customers. She has been an engineer, UX designer, and product manager in Silicon Valley for 20 years and has helped many product teams develop relationships with their customers so they can build better products faster.
I recently had a chance to ask Laura some questions about how she uses data in her practice and what PMs might learn from collecting data from users. Notion, where I’m a product manager, is a tool that helps product people track and understand the data they collect from their teams and customers. We’re building it because we were frustrated by how hard it was to gather all that data and make sense of it, and we want to help product managers make better decisions.
I started by asking Laura how she first came to use data as a source of information in product development.
I’ve always used qualitative research and data to make design and product decisions, but my first real use of quantitative data was at IMVU in 2007. It was the first place where I could make a design change, ship it immediately to users in an a/b test, and get insight into the real behavior change that happened. It was absolutely magical. Closing that feedback loop made future product decisions so much better.
Qualitative vs. Quantitative
I wondered how Laura, as an expert in user research, used data in the research process.
Quantitative data is fantastic at telling you what is happening with your product. It gives you very specific information about the actual behavior of users. Qualitative user research, on the other hand, gives you insight into why those things are happening.
For example, let’s say that you’re looking at your onboarding analytics, and you see that 70% of people are falling out of your funnel at a specific step. That’s really important to know, because you’re losing a big percentage of potential users. Now you have to understand why those users are abandoning your product at that step. You can do that by talking to people who have abandoned your product. You can also do it by observing potential users going through your onboarding process and seeing where they get confused or hung up.
This sort of qualitative research will give you a tremendous amount of insight into the reasons behind your data. It’s only one way in which you can integrate qualitative and quantitative research, but it’s an incredibly powerful tool.
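To make the arithmetic behind that funnel example concrete, here is a minimal sketch in Python, assuming you already have per-step completion counts; the step names and numbers are hypothetical.

```python
# Hypothetical per-step completion counts for an onboarding funnel.
funnel = [
    ("visited_signup", 10_000),
    ("created_account", 6_200),
    ("connected_data", 1_860),   # the problem step in Laura's example
    ("finished_onboarding", 1_500),
]

# Compare each step to the one before it to find where users fall out.
for (step, count), (_, prev_count) in zip(funnel[1:], funnel):
    drop = 1 - count / prev_count
    print(f"{step}: {count} users ({drop:.0%} dropped from the previous step)")
```

A report like this pinpoints where the 70% drop happens; as Laura notes, qualitative research is what tells you why.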
All of us at Notion ran into challenges tracking and managing data at past companies. The data lived in different places and software tools, and we didn’t always know what we should be looking for.
How can keeping track of data be useful?
I asked Laura about the biggest win she and the companies she’s worked with have seen from tracking data.
You don’t get wins from tracking data. You get wins from understanding why users are behaving a specific way and then making good hypotheses about how to change that behavior positively.
Tracking the data, in general, only tells you whether you were right or wrong. There are almost certainly some examples of finding patterns in very large data sets and drawing conclusions directly from those — for example, understanding that people who buy ‘x’ are much more likely to buy ‘y’ — but I haven’t worked with companies with that type of scale.
In addition to the benefits of understanding your users through data, Laura has written extensively on mistakes you can make. Check out her articles 5 Big Mistakes People Make When Analyzing User Data and 7 user research myths and mistakes to learn more.
The tools to be Lean
At Notion, we’ve been influenced by methodologies like Lean and Agile, and Laura has worked extensively to bring Lean principles to the design world. I asked her how she thought different product development philosophies, like Lean or Agile, relate to data collection and implementation.
Data collection and implementation are tactics or tools. Lean and Agile are methodologies. It’s easier to implement Lean Startup well if you have tools in place to measure outcomes, and that means doing things like measuring user funnels and conducting a/b tests and generally collecting data that will help you learn.
If you think about Lean Startup as continually learning and improving, that’s obviously much easier to do when you are measuring things like the impact a new feature has on user behavior. Quantitative data analysis is a large part of how we implement the “measure” part of the “build-measure-learn” loop.
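As one illustration of that “measure” step, here is a minimal sketch of a two-proportion z-test, a standard way to check whether an a/b test variant actually changed a conversion rate. The counts are hypothetical, and it assumes the scipy library is installed.

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical results: conversions and total users in each arm.
control_conversions, control_n = 480, 4_000   # 12.0% conversion
variant_conversions, variant_n = 552, 4_000   # 13.8% conversion

# Pooled conversion rate and standard error of the difference.
p_pool = (control_conversions + variant_conversions) / (control_n + variant_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / control_n + 1 / variant_n))

z = (variant_conversions / variant_n - control_conversions / control_n) / se
p_value = 2 * norm.sf(abs(z))  # two-sided p-value

print(f"z = {z:.2f}, p = {p_value:.3f}")  # p < 0.05 suggests a real change
```

With enough traffic, a result like this closes the loop Laura describes: ship the change, measure the behavior, and learn whether your hypothesis held.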
For many teams, getting started can be the hardest part of adopting a data practice. It can be difficult to know what to track, how to access siloed data, and how to connect the dots. I asked Laura how teams that don’t have a practice around using data can get started. What would be the most useful things to track?
Start with something small, implementable, and useful. Don’t start by collecting all the data in the world. Identify something you need to learn, and understand what decisions you might make differently if you had more insight. Then figure out a way to collect the data. Then keep doing that. Apply the build-measure-learn loop to your own processes.
Size matters.
We are interested in how data like team performance or user behavior impacts product development, which we at Notion call “Little Data.” I was curious how Laura drew a distinction between “Big Data” and this “Little Data,” i.e. internal data for learning about your process.
I don’t tend to get into semantic debates about Big and Little data, but here are a couple of different ways of looking at things. There are some companies that get to a large enough scale that they can start to look at their data for patterns and trends. Amazon and Target have so much data on purchasing habits that they can begin to form hypotheses from that data — for example, really understanding the kinds of impact that tiny price changes might have on certain types of people or looking at how very small fluctuations in page loading speed affect revenue.
Most companies don’t have anywhere near that kind of volume, especially startups. Many don’t even have enough users or volume to run decent a/b tests, which makes relying too heavily on data a little risky. A mid-sized startup might have enough user volume to run an a/b test and understand the behavioral impact of a change to their product design or the addition of a feature, but they’re unlikely to have the sort of volume to mine that data for bigger patterns. There will simply be too much variation and too small a sample size. This sort of testing is still incredibly valuable, but it will answer different questions.
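To put rough numbers on that volume problem, here is a minimal sketch of a standard sample-size calculation for a two-sided test at 5% significance and 80% power. The baseline rate and lift are hypothetical, and it assumes the scipy library is installed.

```python
from scipy.stats import norm

def users_per_arm(p_control, p_variant, alpha=0.05, power=0.80):
    """Approximate users needed in each arm to detect the given difference."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_power = norm.ppf(power)
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    return (z_alpha + z_power) ** 2 * variance / (p_control - p_variant) ** 2

# Detecting a 10% relative lift on a 5% baseline takes roughly 31,000
# users per arm, more traffic than many early-stage startups have.
print(round(users_per_arm(0.05, 0.055)))
```

The required sample grows quickly as the baseline rate or the detectable lift shrinks, which is exactly the variation problem Laura describes.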
Thanks again to Laura for taking the time to talk with us. We’re excited to attend her session at Mind the Product conference in May, and hope to see you there as well. Drop me a line if you plan to attend so we can say “hi”. You can also learn much more about using data for product development in our School of Little Data, a free email course we created to help you get started.

