I’m not a lean startup expert. Not in theory, not in practice. I’ve read the book by Eric Ries. I’ve read articles by Steve Blank. I like the build-measure-learn feedback loop concept. I like it because the approach borrows from the scientific method. Find a problem. Make a guess about it. Compute the consequences of the guess. Compare the consequences to experiment or observation. If it disagrees with experiment, it’s wrong. In lean startup methods, if the guess is wrong, we are told to change the guess, pivot the problem, or give up.

The use of the scientific method is only analogous. In reality, making and testing guesses is subjective. It’s messy. It’s open to more interpretation than proving or disproving something concrete, like a law of physics. And so it is not a guaranteed road to success. In truth, there is none.

I like the lean startup approach. It seems intuitive, a very human thing to do. It’s bottom-up tinkering instead of top-down planning. It’s playing with Lego as a kid and figuring it out by trial and error. It’s fun. Tinkering feels a natural thing to do. And it is. We’ve been doing it for thousands of years.

Take cooking. You make a dish. You taste the dish. Something’s not right; you think the dish needs salt (a guess). You add salt. The dish tastes great. Adding salt worked. That’s a build-measure-learn feedback loop. It’s a thousands-of-years-old idea.

Or take my mum. She owns a cafe. She often observes her customers: what they buy, what they want. Based on this, she makes guesses about a product (a feature!) to add or take away. She tests it for a while and compares the findings to her guess. It’s not strictly scientific. It’s subjective. It’s messy. It’s trial and error. It’s intuitive to us.

What Eric Ries has done is create a framework to make this natural approach better. He’s reduced the entropy of trial and error by creating order. It works because early feedback catches errors early. This reduces the risk of errors compounding.
This increases the chances of survival, which allows more chances for success. Excellent.

The bit that seems most open to interpretation is the concept of “minimum success criteria.” It’s the bit least written about. It’s tricky to do. It’s almost finger-in-the-air stuff. Which is why the concept is only analogous to the scientific method. It’s hard to validate an assumption rigorously. My validation is your falsification. If I’m right, I may build a huge company. If I’m wrong, errors compound and I end up losing capital. So it’s important stuff.

The same goes for corporate innovation. I judge capital allocation by my next best alternative. To do that accurately, minimum success criteria need to be well-defined. If not, an inadequate return on investment is being masked. Your company will suffer.

The current approach to minimum success criteria is riddled with biases. These biases affect testing outcomes. Biases like self-interest, confirmation, and sunk costs. We need to understand these in order to avoid them. Let’s take a look.

First, there is self-interest. If it’s your startup, you’re personally invested and so biased towards a positive outcome. If you work for a startup or company, there is the principal-agent problem. You want to keep your job. It’s hard to objectively act in the best interests of the company, not your best interests as an individual.

This leads to confirmation bias. Out of self-interest, it’s easy to interpret information to confirm existing beliefs. It’s easy to filter out disconfirming information. Not maliciously. Subconsciously.

Then there is the sunk cost fallacy. The more time, energy, or money we invest in something, the more we feel compelled to continue. It’s hard to admit being wrong. It’s hard to be inconsistent. It’s hard to feel like we lost something.

When doing a test, you can go through these biases, checklist style. Check each bias. Tick it off. Don’t fool yourself. You are the easiest person to fool. Do your tests with someone else.
Someone who didn’t make the guesses. Someone less susceptible to the psychological biases. Someone who can be more objective. This will reduce errors.

What else? In investing, Warren Buffett says to have a “margin of safety”: “Build a 15,000-pound bridge if you’re going to drive a 10,000-pound truck over it.” So build some redundancy (a buffer) into your minimum success criteria. Nature likes redundancy.

And what about a framework for minimum success criteria? I don’t know. I’m not an expert in lean startup. I’m not an expert in maths either. But I think the law of total probability and Bayes’ theorem are a good place to start looking.

Startups often face an information asymmetry. There is a lot we don’t know we don’t know. About the problem. About the customer. About the market. It’s a field of uncertainty. Feedback-oriented trial and error (lean) works because tests carry small costs that allow you to learn quickly. In this way, with a bit of luck, a successful startup can be “stumbled upon.” No more. No less.

Bayes’ theorem is a way to update your beliefs given new evidence. It takes a subjective belief (a prior) about something and updates it with new information. With more information, you update the belief again. When I say belief, I mean probability. The updated probability becomes the starting probability for the next test. Like a belief-information-belief feedback loop.

The law of total probability helps find the probability of an event A. It says you can look at the partitions of the total set and add up the amount of probability that falls into each partition. For example, after research, you guess that women aged 20 to 30 who work in sales in big cities will buy your MVP. You split your tests into batches of 100 for each city.
Across all of your tests, the probability that someone buys your MVP is the probability of them buying in each set multiplied by the probability of a person coming from that set, summed over the sets. In this case, 80 of the 400 people tested buy, so it is 80/400 = 20% of all people.

Bayes’ theorem helps us explore the information further. For example, we can ask: given that someone buys, what is the chance they are from set 3 (which might be the city London)? We know the chance of someone buying in general is 20%. We know that there is a 1/4 chance of someone coming from any set. We know that in set 3, 30/100 people buy. Bayes’ formula gives:

P(S3 | buy) = P(buy | S3) × P(S3) / P(buy) = (0.30 × 0.25) / 0.20 = 0.375

Counting the buyers directly makes it more obvious. Of the 80 people buying in total, 30 are from set 3. 30/80 = 37.5%.
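The worked example above can be checked in a few lines. This is a minimal sketch: only set 3’s buyer count (30/100) and the overall total (80/400) come from the text, so the buyer counts for the other three sets are assumptions I chose purely so that all four sets sum to 80.

```python
# Four city batches of 100 people each (400 people total). Set 3 has
# 30 buyers and 80 people buy overall, as in the text. The counts for
# sets 1, 2, and 4 are assumed, chosen only to make the totals match.

buyers = {"S1": 20, "S2": 15, "S3": 30, "S4": 15}  # S1, S2, S4 assumed
batch_size = 100
p_set = 1 / len(buyers)  # each set holds 100 of the 400 people

# Law of total probability: P(buy) = sum over sets of P(buy|Si) * P(Si)
p_buy = sum((n / batch_size) * p_set for n in buyers.values())
print(round(p_buy, 3))  # 0.2, i.e. 20% of all people buy

# Bayes' theorem: P(S3|buy) = P(buy|S3) * P(S3) / P(buy)
p_buy_given_s3 = buyers["S3"] / batch_size  # 30/100 = 0.30
p_s3_given_buy = p_buy_given_s3 * p_set / p_buy
print(round(p_s3_given_buy, 3))  # 0.375, matching 30/80 = 37.5%
```

Whatever the split across the other sets, the posterior for set 3 only depends on its own conversion rate, its share of the population, and the overall buy rate.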
These two tools help us measure what our tests mean. They may be overkill for a lot of testing. I don’t know. I hope other people add their own experiences.

What about what success looks like? At the MVP stage, I think revenue calculations are a good place to start. Does revenue exceed marginal costs? With MVP conversion rates and market size, will enough people buy to cover marginal and fixed costs and make a profit? Start there.

Conclusion: the lean startup method is like directed trial and error. It catches errors early, increasing the chances of finding a business model that works.
Minimum success criteria for testing are a grey area. You have psychological biases affecting your judgement that you need to tick off, checklist style. Two ways to interpret test results are the law of total probability and Bayes’ theorem. Actually setting success criteria is hard. If you’re at the MVP stage, one way to do it is based on revenue less marginal costs. Whatever stage you are at, build a buffer into your success criteria.
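The revenue criterion and the margin of safety can be sketched together. This is a minimal illustration, assuming a single product with a fixed price and a constant marginal cost; the function name and every number below are mine, not figures from the text.

```python
import math

# A revenue-based minimum success criterion with a margin-of-safety
# buffer applied. All numbers are illustrative assumptions.

def min_conversions_needed(fixed_costs, price, marginal_cost, buffer=0.5):
    """Units that must sell to cover fixed and marginal costs,
    inflated by a safety buffer (0.5 mirrors building a 15,000-pound
    bridge for a 10,000-pound truck)."""
    contribution = price - marginal_cost    # profit earned per unit sold
    breakeven = fixed_costs / contribution  # units to cover fixed costs
    return math.ceil(breakeven * (1 + buffer))

# Example: 2,000 in fixed costs, a price of 25, a marginal cost of 5.
# Plain breakeven is 100 units; with the buffer the bar becomes 150.
print(min_conversions_needed(2000, 25, 5))
```

With MVP conversion rates in hand, you can then ask whether the reachable market is large enough to clear that buffered bar, rather than the bare breakeven.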