Lower your risk: test early to learn early

There’s a thrilling product vision that inspires you. But will it really work? Don’t fall into the trap of falling in love with your product baby. Find out what works as early as possible to avoid the risk of building something nobody wants: validate your idea early to learn early. To do so, use an approach that is common in science: Spell out the assumptions behind your ideas of what could be, set up and run clearly defined experiments – let’s call them “tests” – to validate your assumptions continuously, and apply what you’ve learned, iterating through your assumptions about what might work. So what does it mean to “validate”? It simply means that you find ways to further confirm – or rebut – an assumption. Was what you thought actually nonsense? Or did you discover evidence that you might be on the right track? Be aware that validating ideas is not proof, but a way to get closer to reality.

Start with testing ideas about customer problems

Don’t just think about validating product ideas. Minimize your risk by validating customer problems first, aiming for a problem-solution fit. Only once you’ve found strong evidence that your solution addresses a real and relevant pain or gain should you start validating whether customers are willing to pay for it, striving for product-market fit. Finally, you can test whether your value proposition can fuel a scalable business model that generates revenue, validating the business model fit.

In this post, let’s start your validation journey by focusing on testing customer problems first. As a first step, make your assumptions explicit. What is the customer job your product might be hired for? You can use Alexander Osterwalder’s Value Proposition Canvas as a starting point to map out your ideas about the jobs your customers might hire your product for, the pains they experience while getting these jobs done, and the gains they are looking for. Prioritize jobs, pains, and gains in order of relevance for your customers. Write down your assumptions – or “hypotheses”, as scientists would call them. Osterwalder’s test cards offer great help in writing down exactly what you believe and how you will measure whether it could be true. Be creative and assign metrics to your test, e.g. “if 6 out of 10 people answer that question with ‘yes’, we will move forward with that assumption.”
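The pass/fail rule on a test card can be made explicit in a few lines of code – a minimal sketch, assuming a simple share-of-yes-answers metric; the function name and the 60% default threshold are illustrative, not part of Osterwalder’s method:

```python
def assumption_validated(yes_answers: int, total_answers: int,
                         threshold: float = 0.6) -> bool:
    """Return True when the share of 'yes' answers meets the pass threshold.

    The default threshold mirrors the example rule 'move on if 6 of 10
    say yes'; choose a stricter threshold for riskier assumptions.
    """
    if total_answers == 0:
        return False  # no evidence yet, so don't declare the test passed
    return yes_answers / total_answers >= threshold


# Example: 6 of 10 interviewees confirmed the pain we assumed.
print(assumption_validated(6, 10))  # True: move forward with the assumption
print(assumption_validated(4, 10))  # False: the assumption is not validated
```

Writing the rule down before interviewing keeps you honest: you decide in advance what counts as evidence, instead of rationalizing the numbers afterwards.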

Find jobs, pains and gains with qualitative user research

If you lack ideas or need some first evidence about the jobs, pains, or gains to map onto your customer segment, slip deeper into your customers’ shoes. Qualitative user research, common in Design Thinking, is a good starting point. You can immerse yourself in the customer’s role, observe your customers, invite them to create something like a product box or a sketch to express their needs, or simply interview them: Talk to people who belong to your draft customer segment, find the jobs your customers have to accomplish in a given context, and explore the pains and gains connected to these jobs. Ask open questions in your interviews and listen to stories. To synthesize your findings, pick the “golden nuggets” out of your notes, extract them onto sticky notes, then share and cluster them with your teammates to identify common patterns. Now you’re there: You have found the jobs, pains, and gains that are worth validating.

Set up a test to validate

With a test you can further validate your assumptions. In the early phase of customer validation you will move on with interviews, allowing you to mix hard yes/no checks of your assumptions with a broader exploration of the customer’s problem context. The more critical and risky a hypothesis is for your product’s success, the more energy you should invest in validating it. And the other way round: Don’t over-invest in testing if your assumption isn’t business-critical. If you need more evidence for a core assumption, you might add some quantitative research to your activities, e.g. an online survey.

To prepare a test with interviews, start with 10 to 20 people who represent your customer segment. If you do random interviews on the street, prepare a short interview of about 5 minutes. If you can invite people to your test space, run interviews of about half an hour. Prepare an interview guideline that includes both yes/no questions to validate your assumptions and open questions to further explore the problem context. Ash Maurya’s problem interview script is a good guideline for your preparation. To run several interviews in parallel, engage your whole team. Pair up for each interview, assigning a notetaker and an interviewer. Be aware that people don’t always do what they say. To validate whether there is real interest in your topic, you can offer each interviewee a unique, trackable link to download further information. If a person uses that link, you know they are really interested.
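The trackable-link trick can be sketched in a few lines – a hypothetical example using Python’s standard `secrets` module; the URL pattern, base address, and interviewee names are made up for illustration:

```python
import secrets


def make_tracking_link(base_url: str = "https://example.com/material") -> tuple[str, str]:
    """Generate a unique, hard-to-guess token and the personal link built from it."""
    token = secrets.token_urlsafe(8)  # one random token per interviewee
    return token, f"{base_url}?ref={token}"


# Hand out one personal link per interviewee and remember who got which token.
links = {}
for person in ["interviewee-01", "interviewee-02", "interviewee-03"]:
    token, url = make_tracking_link()
    links[token] = person

# Later, a request containing ?ref=<token> in your web server's access log
# tells you exactly which interviewee was interested enough to follow up.
```

Because each token is random and handed to exactly one person, a hit on the link is behavioral evidence of interest – stronger than a polite “yes” in the interview itself.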

Prototype to trigger feedback about the why

Many product teams invest energy in building a prototype of their product even when they are not yet clear about their customers’ jobs. If you have already designed a prototype of your solution, you can show it in your test. But since you’re still in the phase of validating problems, take care not to ask people whether they like it, but why they would use it. Don’t fall into the trap of selling your idea; explore your customers’ ideas about it instead. Listen, make them feel safe, give them permission to play, allow them to interact with your prototype, and let them say aloud what they think and feel – we call this the “Think Aloud” method. Think of your prototype as a trigger to learn more about the jobs your customers want to get done and the pains and gains attached to them.

Set up a build-measure-learn cycle

Do not test just once; test continuously. If possible, build up a small group of testers you can invite regularly. By establishing a build-measure-learn cycle, you open up a path towards continuous, empirical learning about both your product and your customers. In business we call this the lean startup cycle. It’s a good idea to link your test activities to an agile process framework, e.g. Scrum: Instead of features, you “build” tests in your sprints, and after every sprint you benefit from new learnings on your path towards building a product that customers love.

Helpful Resources

Please add more links that you think are relevant.