Is waterfall making a quiet comeback? (sort of)

7 min read

I’ve been thinking about this for a while, and I’m not sure if I have a clean answer, but I felt the need to put some structure on it by writing it down.

We had waterfall as the bogeyman scaring any developer who considered themselves decent at their craft. The classic criticism of waterfall was pretty simple: you’d spend months writing a detailed spec, then months implementing it, and somewhere in the middle requirements would change and you’d end up delivering something nobody wanted. Both the requirements phase and the implementation phase were expensive. And the gap between “what we specified” and “what the business actually needed” had plenty of time to grow. Oh, and developers hated writing detailed specs. That’s why we had business analysts.

Agile was a reaction to that. The core philosophy was: if requirements change anyway, don’t try to capture everything upfront. Work in short cycles, get feedback early, let the spec evolve and the architecture emerge. “Just enough, just in time” was the favorite phrase of the Agile movement (at least in my area). This worked partly because it reduced the cost of requirements (you didn’t need all of them at once) and partly because it created tighter feedback loops with the actual stakeholders. The implementation cost didn’t fundamentally change: writing code was still the same amount of work, but you were writing less of the wrong code. That was the real saving. When requirements changed, you threw away a few iterations’ worth of code, not months of it.


Now there’s a new variable in the equation: AI agents can write code significantly faster and cheaper than before. Significantly! You’d think this would accelerate Agile practices further: faster iterations, faster feedback, even shorter cycles.

But here’s what I’m actually observing: it doesn’t quite work that way (except on LinkedIn, where everyone has agents running overnight doing the job for them, with great success). AI agents, at least in their current form, don’t do well with vague or partial requirements. The “just enough, just in time” approach that works when a human developer can ask clarifying questions, make judgment calls, and course-correct on instinct produces pretty mediocre results when handed to an AI. You get technically correct code that misses the point. Or you get something that looks right but has wrong assumptions baked several layers deep. The ambiguity that a skilled developer would flag in a standup becomes a silent wrong decision that you discover three sprints later.

So what you actually need, if you want AI agents to be effective, is a fairly detailed spec. Not a “here’s a rough idea, figure it out” prompt. More like a precise description of the system’s behavior, edge cases, data contracts, error conditions, high-level design. The stuff that was considered heavy waterfall ceremony (and that developers hated).

Which puts us in an interesting position. The cost of writing requirements is roughly the same as before (arguably higher, as AI seems to need more precision than humans). The cost of implementation dropped substantially. The tradeoff looks different now:

  • Classical waterfall: expensive requirements + expensive code = very painful when requirements change
  • Agile: cheap enough requirements + same code cost = tolerable when requirements change
  • AI agents: expensive-ish requirements + cheap code = ???

The “???” is where I’m stuck. If code is cheap, then being wrong about requirements is less catastrophic: you can throw away the implementation and redo it. But the cost of being wrong has shifted from the implementation side to the specification side. Writing a bad spec that generates bad code quickly isn’t that much better than writing good code slowly.


But maybe there’s a flip side. If code is cheap and fast, then maybe you can use it differently. Not as the artifact you’re trying to perfect, but as a way to validate requirements. Write a spec, generate an implementation, put it in front of users, learn, rewrite the spec, regenerate. The implementation becomes almost disposable. It’s a probe into whether you specified the right thing, not the thing itself.

That’s different from Agile (where you iterate on the code) and different from waterfall (where you don’t validate until the end). It’s more like using code as a validation tool for requirements rather than as the primary deliverable.

There’s a precedent worth thinking about here. TDD had a similar shape: write the test first (the validation), let it drive the implementation. For a while it was genuinely transformative. It forced clarity about what code was supposed to do before you wrote it, and it caught a class of problems early. Then in 2014 DHH declared “TDD is dead” and kicked off a fairly public debate with Martin Fowler and Kent Beck. The core complaint was that strict TDD had become its own kind of damage: people writing code to satisfy tests rather than to solve problems, mocking everything into abstraction, losing sight of whether the software actually worked for users. Testing survived and matured. TDD as religion mostly faded.
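For readers who never lived through the TDD years, the rhythm is easy to show in a few lines. This is a minimal sketch, not anyone’s canonical method; the `slugify` function is a hypothetical example chosen only to illustrate the loop.

```python
# TDD rhythm, step 1: the test exists before the code it validates.
# Running this file with slugify undefined would fail -- that failing
# test is the point: it pins down the expected behavior first.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces   everywhere ") == "spaces-everywhere"

# Step 2: write just enough implementation to make the test pass.
def slugify(text: str) -> str:
    words = [w.strip(",.!?") for w in text.lower().split()]
    return "-".join(w for w in words if w)

test_slugify()  # the validation drives the implementation, not vice versa
```

The test is the spec in miniature: a precise, checkable statement of intent written before any implementation exists. That’s the structural parallel to spec-first AI workflows.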

The lesson isn’t that the validation-first instinct was wrong. It’s that it became a methodology, and methodologies tend to become religion. Teams started writing tests for tests’ sake. The test became the artifact. Then you needed mechanisms to verify the quality of the tests themselves, and suddenly everyone must do mutation testing.

The same trap exists here. If “write a detailed spec, then generate code” becomes the new religion, you’ll eventually get teams writing specs for specs’ sake. Documents that satisfy the AI pipeline but don’t reflect what users actually need. The spec becomes the deliverable. That’s just waterfall with extra steps. And there are already a lot of tools for this spec-first approach (are they the new programming languages or the new Agile frameworks? looking at you, SAFe).


I don’t have a name for this. “Validation-driven development” is a bit of a stretch, and the industry already has enough named methodologies. But it feels like it deserves thinking about separately, because the optimization function is different. In this model the bottleneck isn’t writing code, it’s writing good specs and having something to validate against. The feedback loops change. The roles potentially change.

What doesn’t change: requirements are still hard. Understanding what users actually need is still hard. The AI can help write the spec (and it’s genuinely useful for that), but you still need a human to decide what’s important and what’s not.

This isn’t a waterfall comeback. The sequential “write everything then implement everything” failure mode still applies. What has changed is the cost curve underneath the whole debate. Waterfall failed because both phases were expensive and sequential: you couldn’t afford to be wrong, so you tried to be right upfront, and usually weren’t. What’s emerging now is structurally different: the spec is the expensive, human-irreplaceable part, and the implementation is the cheap, fast, disposable part. The flow isn’t linear; it cycles, but the cycles are driven by spec quality, not code quality.

That’s a genuinely different optimization problem from both waterfall and Agile. The “just enough, just in time” instinct that Agile normalized needs revision. The right amount of upfront spec is probably more than Agile purists would like and less than classic waterfall required, but it’s drifting toward the more-detail end of that spectrum, at least for AI-assisted workflows. Not because waterfall was right all along, but because the cost curve changed underneath us.

There’s also a human side to this that’s easy to ignore. A lot of people got into software because they liked building things. Writing code was the craft. Writing specs was the tax you paid to get to the craft. If the new workflow flips that ratio, will people actually sustain it? There’s genuine enthusiasm right now for the spec-first approach, partly because it’s new and the AI output feels like magic. But novelty fades. The developers who were happiest writing code and least happy writing requirements documents didn’t change their personalities when AI agents arrived.

Maybe that’s the actual change: not a methodology shift, but a recalibration of where on the requirements-detail spectrum you want to be. And my answer is: more than you think.

Just don’t make it a religion.