Decision Teardowns

Applied breakdowns of reasoning. For each decision we name what was assumed, what was missed, and what would have made the call less wrong — graded by the decision, not the outcome.

  • Teardown 01 · Product Thinking · 9 min read

    A Product Launch Built on Unvalidated Demand

    Decision context

    A team shipped a flagship feature on the strength of qualitative interest, internal enthusiasm, and a roadmap deadline.

    What was assumed

    That stated interest in early conversations would translate into adoption, and that competitor traction implied unmet demand in their own market.

    What was missed

    No falsifiable demand test. No segment-level signal. No baseline of how many similar features had launched and quietly failed inside the same org.

    What would improve the decision

    A small, time-boxed demand test with a kill criterion defined in advance — not a launch dressed up as a learning exercise.
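    The "kill criterion defined in advance" idea can be made concrete: write the threshold and the time box down before the test starts, then compare observed data against it mechanically. A minimal sketch, with hypothetical metric names and thresholds (nothing here comes from the teardown itself):

    ```python
    # A pre-registered kill criterion for a demand test.
    # The metric, threshold, and time box are hypothetical illustrations;
    # the point is that they are fixed before any data comes in.

    from dataclasses import dataclass

    @dataclass(frozen=True)  # frozen: the criterion cannot be edited mid-test
    class KillCriterion:
        metric: str          # what is measured, e.g. signups per 1,000 visitors
        minimum: float       # below this at the deadline, the feature is killed
        deadline_days: int   # the test is time-boxed; no extensions

    def verdict(criterion: KillCriterion, observed: float, days_elapsed: int) -> str:
        """Compare observed data against the pre-registered threshold."""
        if days_elapsed < criterion.deadline_days:
            return "still running"
        return "proceed" if observed >= criterion.minimum else "kill"

    # Usage: the criterion is written down first, then the data decides.
    test = KillCriterion(metric="signups_per_1k_visitors", minimum=12.0, deadline_days=14)
    print(verdict(test, observed=7.5, days_elapsed=14))  # -> kill
    ```

    Freezing the dataclass is the design choice that matters: it models the commitment not to move the goalposts once results start arriving.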

  • Teardown 02 · Decision Quality · 7 min read

    A Team Retrospective That Produced Actions but Not Decisions

    Decision context

    After a missed quarter, the team ran a thorough retro and walked away with a long list of action items everyone agreed on.

    What was assumed

    That alignment on actions implied alignment on the underlying diagnosis — and that more actions meant a better outcome next time.

    What was missed

    No one named the actual decision that had gone wrong, or the assumption it had rested on. The retro graded outcomes, not reasoning.

    What would improve the decision

    Separate the decision review from the action review. Reconstruct what was knowable at the time, then ask whether the call was sound given that.

  • Teardown 03 · AI + Judgment · 10 min read

    An AI Strategy Recommendation With Missing Evidence

    Decision context

    Leadership accepted a strategy memo generated with AI assistance. It read confidently, cited plausible numbers, and proposed a clear direction.

    What was assumed

    That fluent structure and confident tone reflected underlying rigor, and that the cited figures had been verified rather than generated.

    What was missed

    An evidence chain check. Several load-bearing claims had no traceable source. The recommendation rested on assertions, not evidence.

    What would improve the decision

    An explicit evaluation layer between AI output and decision: claim → source → strength. Anything unverified gets flagged, not promoted.
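    The claim → source → strength layer can be sketched as a simple triage pass over the memo's claims. A minimal illustration, assuming hypothetical claim text and source labels (not drawn from the actual memo):

    ```python
    # A claim -> source -> strength check between AI output and a decision.
    # Claims with no traceable source are flagged, never promoted.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Claim:
        text: str
        source: Optional[str]         # None means no traceable source
        strength: str = "unverified"  # e.g. "strong", "weak", "unverified"

    def triage(claims: list[Claim]) -> dict[str, list[str]]:
        """Split claims into promoted (sourced and verified) and flagged."""
        promoted, flagged = [], []
        for c in claims:
            if c.source and c.strength != "unverified":
                promoted.append(c.text)
            else:
                flagged.append(c.text)
        return {"promoted": promoted, "flagged": flagged}

    memo = [
        Claim("Market grows 30% YoY", source=None),  # asserted, not evidenced
        Claim("Churn fell after pricing change", source="Q3 cohort data",
              strength="strong"),
    ]
    print(triage(memo))  # the unsourced growth figure lands in "flagged"
    ```

    The useful property is the default: a claim starts as `"unverified"` and stays flagged until someone attaches a source and a strength, so fluent-but-unsupported assertions cannot pass through silently.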

  • Teardown 04 · Frameworks · 8 min read

    A Feature Prioritization Choice With an Incomplete Option Set

    Decision context

    A product team chose between two well-scoped features in a planning session and committed to the one with the stronger internal champion.

    What was assumed

    That the two options on the table represented the real choice space, and that comparing them carefully was the same as choosing well.

    What was missed

    A third and fourth option — including 'do neither, instrument the assumption first' — were never written down, so they were never evaluated.

    What would improve the decision

    Require at least one additional option, including a do-nothing or learn-first alternative, before any prioritization debate begins.