The MVP Myth: Why Minimum Viable Product Usually Isn't
The minimum viable product is one of the most useful concepts in the history of product development and one of the most consistently misapplied. In its original framing, the MVP is the smallest possible thing that can generate real learning from real users — not a prototype, not a demo, not a landing page with a waitlist, but something with enough functionality that a real person would use it for a real purpose and produce real behavioral data as a result. The concept is rigorous, empirical, and demanding. What it became in practice is a permission slip to ship things that don’t work.
The corruption of MVP happened through two simultaneous distortions. The first was scope expansion: founders discovered that “minimum viable” could be stretched to justify almost any level of completeness by arguing that additional features were necessary for viability. The second was the substitution of marketing artifacts for product artifacts — the landing page MVP, the explainer video MVP, the coming-soon page with an email field — which measure interest in a concept rather than willingness to use and pay for a product. Both distortions preserve the vocabulary of validation while discarding its substance.
A real minimum viable product is often uncomfortable to ship. It does less than you want it to do. It looks rougher than you want it to look. It handles edge cases poorly and requires workarounds that you intend to eliminate in the next version. The discomfort is the point — it means the scope is actually minimum. Anything that feels finished before it’s in front of real users is probably not minimum enough to generate the kind of learning that justifies the time to build it.
For bootstrapped builders, the MVP discipline has a specific financial dimension. Every week of building before revenue is a week of operating without income, which means the definition of “viable” needs to include “generates revenue” rather than just “generates data.” The academic MVP framework was developed in an environment where funding extended the learning runway indefinitely. In a bootstrapped environment, the runway is your own financial reserves, and the definition of viability needs to be tighter and more commercially specific. You’re not looking for learning that will inform the next funding pitch. You’re looking for learning that will pay rent.
The minimal version of this, taken seriously, often means things that don’t scale. Taking orders manually before building order management. Sending emails by hand before building an automated sequence. Doing the service yourself before building the platform that automates it. The “do things that don’t scale” advice from Paul Graham is, in the bootstrapped context, not a growth hack but an MVP strategy — a way of validating the commercial model with zero infrastructure investment before spending time and money on the infrastructure.
The test for whether something is actually minimum is whether it could be smaller and still test the core hypothesis. If the answer is yes, it should be smaller. The test for whether it is actually viable is whether a real customer has paid real money for it. Everything short of that payment is speculation, however sophisticated and well-reasoned. The MVP process is the mechanism for collapsing speculation into evidence as quickly and cheaply as possible.
The myth is that the MVP is a product stage. The reality is that it is an epistemological discipline — a way of being honest about what you know versus what you believe, and designing the cheapest possible experiment to close the gap. Applied correctly, it produces businesses faster. Applied as permission to ship half-finished work and call it lean, it just produces half-finished work at a vocabulary premium.