Case Study

The Multiplier Method

One project. 20 weeks. Same team. A 4–20x output multiplier — depending on the workflow. Here's the method, and why it compounds.

Parallel Execution

Most teams work linearly: one developer, one task, one branch, one problem at a time. The method flips that — parallel tracks running simultaneously, with concurrent AI sessions across worktrees and instant context-switching the moment something blocks.

Expand scope, iterate on an issue with a coworker, and, depending on its size, implement it on the spot. The team owns the code and the technical implementation; the PO stays in the loop, is part of the iteration, and checks validity; and, depending on the setup, QA gets a documented checklist of what was done and how to validate it.

This is why the same AI tools produce a 4–20x multiplier with this method and 1.3x without it. The tools don't create the multiplier. The workflow does — and it's a workflow your team can learn.

Iteration at Three Scales

Agile without the ceremony. The multiplier isn't speed. It's iteration density.

Build (ship it) → Evaluate (what's wrong?) → Tighten (cut the fat) → Repeat (faster now). Iterate: density > speed.

Hours

Build 7 variations, compare live. 15+ polish commits in minutes. Design by building, not by speccing.

Days

Implement, learn it's wrong, throw it away, rebuild better. Two implementations beat one long planning session.

Weeks

Phases that inform each other. You only see what's wrong once it's working. Archive, restructure, ship again.

Clean Code Compounds

In 20 weeks, this method removed 319,000 lines of code. Lines of code are a meaningless metric on their own — but the scale shows what happens when removing tech debt stops competing with feature work and starts happening alongside it.

Each cleanup makes subsequent work cheaper, and AI amplifies the effect. A consistent codebase means the AI follows patterns exactly, producing fewer corrections and higher-quality output. Messy code equals lower AI accuracy. Clean code equals AI matching conventions.

Invest 2–3 days in cleanup → harvest weeks of acceleration. The flywheel spins: cleanup → AI works better → more output → more capacity for cleanup.

AI Writes Working Code, Not Maintainable Code

The core insight: AI produces excellent MVPs and one-off work. But it optimizes for the current check. The human optimizes for all future checks.

If a test is flaky, an AI will add a 30-second delay and a retry loop. It passes. A human fixes the race condition (or has the AI fix it) and makes the test deterministic. The AI writes a generic dictionary; a human makes it a strongly typed response. AI is like a senior developer making junior choices, and it's up to us to guide it.
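The flaky-test fix above can be sketched in a few lines of Python (the `Worker` class here is invented for illustration, not from the project): instead of sleeping and retrying, the test waits on an explicit completion signal, so it is deterministic and fails fast.

```python
import threading

class Worker:
    """Background worker that signals completion instead of leaving callers to guess timing."""
    def __init__(self):
        self.done = threading.Event()
        self.result = None

    def start(self):
        threading.Thread(target=self._run).start()

    def _run(self):
        self.result = 42   # stand-in for real work
        self.done.set()    # deterministic completion signal

# The AI's patch: sleep and hope the race resolves.
#   time.sleep(30); assert worker.result == 42   # slow, still flaky

# The human's fix: wait on the signal, fail fast if it never comes.
worker = Worker()
worker.start()
assert worker.done.wait(timeout=5)  # blocks only until completion
assert worker.result == 42
```

The point is not the `Event` itself but the shape of the fix: the test now encodes *when* the work is done, rather than guessing how long it takes.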

Without human intervention, each session builds on the last session's shortcuts, and the codebase drifts into slop. With it, each session builds on a codebase that's more explicit than before. The goal isn't "set up AI for your team." It's teaching your team to get compounding returns from AI instead of compounding debt.

The Storage Unlock

A new implementation ran into an issue: files weren't available to an external service because of internal encryption that was no longer needed or wanted. Solving it meant fixing three problems that had each stalled for months.

The new story forced the pace, and tracing all three to the same root cause unlocked them at once: resurrect the old decrypt-storage story, then build a fallback pattern so encrypted and unencrypted files coexist. That gives the test environment read-only blob access and allows delivery to production straight away; the download-decrypt-upload task gets validated in a separate unit of work. Three blockers resolved in one day.

That's what the multiplier looks like: not typing faster, but connecting the dots across silos and acting on them immediately, all while keeping the stories up to date with implementation details and the choices made. Fast iteration, transparent and understandable for the team.
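A minimal sketch of the fallback pattern, assuming a simple key-value blob store (every name and the toy decrypt function here are hypothetical, not the project's real storage layer): prefer the unencrypted blob, fall back to decrypt-on-read, so both formats coexist during the transition.

```python
def read_blob(store: dict, key: str, decrypt=lambda b: bytes(b)) -> bytes:
    """Fallback read: prefer the plaintext blob, fall back to the legacy encrypted copy."""
    if key in store:                    # new path: unencrypted blob
        return store[key]
    enc_key = key + ".enc"              # legacy path: encrypted copy
    if enc_key in store:
        return decrypt(store[enc_key])  # decrypt on read during the migration
    raise KeyError(key)

# Toy store with one new-style and one legacy blob ("encryption" = reversal, for illustration).
store = {"report.pdf": b"plain bytes", "legacy.pdf.enc": b"setyb"}
assert read_blob(store, "report.pdf") == b"plain bytes"
assert read_blob(store, "legacy.pdf", decrypt=lambda b: b[::-1]) == b"bytes"
```

Because the fallback lives in one read path, callers never need to know which format a given file is in, which is what lets the change ship to production before the bulk download-decrypt-upload migration runs.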

Teaching It, Not Just Doing It

The method transfers. But you can't just hand people the tools. Closing the gap between 1.3x and the full multiplier takes three things happening at once:

Make the code speak for itself

Every strongly typed response and every enum replacing a string removes a judgment call. The agent can't produce inconsistent output if the consistent pattern is the only option.
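What removing a judgment call looks like in practice, as an illustrative Python sketch (the names are invented): with an enum and a typed response, the consistent shape is the only shape the agent can produce.

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):          # an enum instead of a free-form string:
    ACTIVE = "active"        # "Active", "ACTIVE", "enabled"... are no longer options
    SUSPENDED = "suspended"

@dataclass(frozen=True)
class AccountResponse:       # a typed response instead of a generic dict:
    account_id: str          # a missing or misspelled field fails at construction,
    status: Status           # not three calls later

resp = AccountResponse(account_id="acct-1", status=Status.ACTIVE)
assert resp.status is Status.ACTIVE

# The generic-dict version would accept anything, typos included:
#   {"acount_id": "acct-1", "status": "Actve"}
```

Each constraint like this is one fewer place where an AI session can drift from the codebase's conventions.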

Get it out of people's heads

Stories become living documents. AI-readable project configs codify knowledge that otherwise lives in one person's head.
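What an AI-readable project config can look like, as a hypothetical sketch; the file name and the rules below are invented for illustration, not taken from the project:

```markdown
# PROJECT.md — conventions the agent reads before every session
- API responses are typed records, never raw dictionaries.
- Tests must be deterministic: wait on completion signals, never on timeouts.
- Encrypted blobs are legacy; new writes are unencrypted, reads fall back.
```

Once rules like these live in a file the agent reads, they stop living in one person's head.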

Show, don't tell

Live pairing sessions where the team watches the parallel workflow in action. Documentation says what. Live observation shows when and why.

Let's talk about your team