The Failure Autopsy: How to Learn from a Dead Project Without Wasting More Time on It
Most failed projects get one of two postmortems: too much or too little. The too-much version turns the failure into a narrative — a blog post, a retrospective thread, an extended personal reckoning that may be emotionally necessary but that rarely produces the specific, operational insights that would actually change future behavior. The too-little version suppresses the failure entirely, pivoting quickly toward the next thing to avoid the discomfort of analysis, and forfeits all the information the failure contained. The useful version is neither. It is short, specific, and deliberately unsentimental.
A project failure generates a finite set of information, and the autopsy’s job is to extract that information before it fades — before the emotions attached to the failure have distorted the memory into a narrative, and before the specific details that would be most instructive have been replaced by a general impression. This means the autopsy should happen quickly, within days of the decision to stop, and should follow a structure that prevents the natural tendency to either catastrophize or rationalize.
The first question is: what was the hypothesis? Every project rests on at least one belief about how the world works: that a market exists, that a problem is painful enough that people will pay to solve it, that a particular audience will respond to a particular offer. Failed projects often reveal not that the product was bad but that the hypothesis was wrong, and a wrong hypothesis is the most valuable thing a failure can produce, because it is information about reality that an expensive success might have obscured for years.
The second question is: when did the evidence against the hypothesis first become available? This is the most uncomfortable part of the autopsy and the most instructive. Almost every project that fails at month twelve shows warning signs at month three: customer conversations that didn’t go quite right, conversion rates that were soft from the beginning, a market that was slightly smaller or less motivated than the hypothesis assumed. Pinpointing the moment when the negative evidence first became available is the part of the analysis that changes future behavior: you cannot change the failure, but you can change how quickly you recognize the pattern next time.
The third question is: what, specifically, would have had to be different for this to work? Not in the sense of “what should I have done” (that is a moral question that leads to blame rather than learning) but in the factual sense of which conditions would have produced a different outcome: a different customer segment, a different price point, a different distribution channel, different timing. This question separates contingent failures (which might have worked under different conditions and suggest a pivot rather than abandonment) from structural failures (which were never going to work under any realistic conditions and suggest learning something fundamental about a market or model).
The fourth and final question is the one most people skip: what does this failure tell you about how you evaluate opportunities? A failed project is data about your judgment, not just about the market. If the hypothesis was wrong, why did it seem right? What evidence were you overweighting, and what were you underweighting? The answer to this question is the one that compounds over time — the meta-learning that makes subsequent projects systematically better rather than just differently wrong.
Keep the whole exercise to two hours maximum. A failure that takes a week to process has been allowed to metastasize into something it isn’t. It’s a failed project, not a failed life, and keeping those two things separate requires the analytical distance that two hours of structured questioning can provide and that extended rumination reliably destroys. Extract the information, update the model, and move on. The next project is already waiting.