
WaterfallIsSilly

From AdviceOnHiring
See also WaterfallFolklore

As an aside, people might argue that the waterfall process is silly and never works. I guess this depends on how we define waterfall. In my experience, the waterfall is a simple and straightforward process. It is very efficient and many people still use it effectively. It just doesn't get much good press.

DwaynePhillips 15 August 2003


I don't think waterfall is silly and I know it works. But it works only in a limited set of circumstances, such as when the people have expertise creating these kinds of solutions and when the requirements are clear and not subject to change. At least, not much change.

People get hung up on lifecycles because they don't take the time to think about the requirements to make a lifecycle successful.

JohannaRothman 2003.08.15


My first rule for choosing methodology for projects is this:
If it will work, use a waterfall process as your first choice.
If it won't work, don't use a waterfall process as any choice.

- JerryWeinberg 2003.08.15


Because I think waterfall is silly, it probably won't work for me. (It never did work for me, even back when I didn't think it was silly.)

This may sound like I disagree with Dwayne, Johanna, and Jerry, but I suspect that's not really the case. As Dwayne notes, it's all a matter of what you call "waterfall". Maybe we can probe for the outlines of our shared notion of what "waterfall" is, and even pin down the discrepancies that give rise to the apparent disagreement.

I'll start with size. Give me a small enough waterfall, and I can take that seriously.

From my point of view the defining characteristic of waterfall is that all the testing gets done at the end. This is silly if what goes into test is complex enough to contain more than one mistake, because as soon as you have two mistakes the possibility that they will interact makes testing very painful.

For most projects and most languages I can confidently type about three to four lines of code with no more than one mistake. That's the size of waterfall I could do. (I don't do waterfall - I do my testing at the beginning.)
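As a minimal sketch of what "testing at the beginning" can look like at that scale - the parse_price helper and its tests are invented purely for illustration, in Python - the test is written first, and the three or four lines that satisfy it come second:

    import unittest

    # The test comes first...
    class ParsePriceTest(unittest.TestCase):
        def test_strips_currency_symbol(self):
            self.assertEqual(parse_price("$12.50"), 12.50)

        def test_plain_number(self):
            self.assertEqual(parse_price(" 3 "), 3.0)

    # ...then the three-or-four-line "waterfall" that makes it pass.
    def parse_price(text):
        cleaned = text.strip().lstrip("$")   # drop whitespace and a leading $
        return float(cleaned)

    if __name__ == "__main__":
        unittest.main()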

Great requirements, analysis, and design work could, I suppose, boost that figure. Perhaps I could write twenty lines with at most one mistake under such conditions. I suspect that one could get a lot more mileage out of formal methods. But I doubt that anyone, or any team, can write tens or hundreds of thousands of lines and not make several mistakes.

LaurentBossavit 2003.08.17


What if we test the requirements, analysis and design work before we start development? SherryHeinze 2003.08.16


To me, waterfall is simply doing one thing after another in a fixed sequence. There can be several "test" steps in that sequence, and several "repair" steps - as long as their number is fixed (or upper-bounded). Looping creates problems, so that's why I prefer to build in pieces that can be done (95+% of the time, anyway) in a fixed, sequential process. If they cannot be so done, then you have to use a looping process, but often such a process is used simply because the people didn't take the trouble to keep the reliability of each step high enough. That's usually out of a rush to code/build. - JerryWeinberg 2003.08.16
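A rough sketch of the distinction being drawn here, with hypothetical step, check, and repair callables standing in for real work: the fixed sequence allows only a bounded number of repair passes per step, while the looping alternative repairs until an exit criterion is met.

    MAX_REPAIRS = 2   # the number of repair passes is fixed (upper-bounded) in advance

    def run_fixed_sequence(steps, check, repair):
        """Waterfall-style: each step runs once, in order, with at most MAX_REPAIRS fixes."""
        for step in steps:
            result = step()
            for _ in range(MAX_REPAIRS):
                if check(result):
                    break
                result = repair(result)
            # After the allowed repairs we move on; there is no unbounded looping.

    def run_looping_process(steps, check, repair):
        """The looping alternative: keep repairing until the exit criterion is met."""
        for step in steps:
            result = step()
            while not check(result):
                result = repair(result)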


Is the problem you see with a "looping" process the exit criteria to stop looping/iterating? Or the temptation to do things over again when it isn't necessary? What is the problem?

The original Royce paper on waterfall - I'm told - recommended designing and building the software twice... because you don't know enough to do it right the first time.

KeithRay 2003.08.16


Here is the reference:

Winston W. Royce. "Managing the Development of Large Software Systems: Concepts and Techniques", 1970 WESCON Technical Papers, v.14, Western Electronic Show and Convention, Los Angeles, Aug. 25-28, 1970; Los Angeles: WESCON, 1970, pp.A/1-1 -- A/1-9; Reprinted in Proceedings of the Ninth International Conference on Software Engineering, Pittsburgh, PA, USA, ACM Press, 1989, pp.328--338.

A link: http://facweb.cti.depaul.edu/jhuang/is553/Royce.pdf -- CharlesAdams 2003.08.16


I consider waterfall to be "do the requirements, do the design, do the implementation, do the delivery." Testing, in various forms, occurs along the way.

I consider non-waterfall to be "do some requirements, do some design, do some implementation, do some delivery" and then repeat as many times as appropriate. Non-waterfall allows for learning by everyone involved. That is helpful if the implementers don't know the subject well or if the customers don't know what they want well.

If everyone knows what they want and how to do it, waterfall is very efficient.

DwaynePhillips 17 August 2003


I believe that Waterfall as predictable schedule delivery has a range of workability. I also believe that most of the systems we develop naturally tend to fall outside that envelope of workability.

Factors that allow waterfall scheduling to work are:

  • Requirements stability and clarity.
  • Self-contained product (because external dependencies could vary abruptly).
  • Small enough duration to deliver before obsolescence of requirements.
  • Repeat of previous project(s).

My experience in software is:

  • Systems get bigger.
  • People expect connectivity.
  • Those former small projects have been solved - they're now just a module call away. You're working on the next iteration of complexity.

Outside of the academic/training world, there's not much demand for "Program to copy cards to tape" anymore. Playground-sized programs like that don't represent the essence of useful project work. I'd do something like that in a 1-2 hour exploration in some new language or new operating system.

One essence of modern software projects seems to be managing the interfaces of components, libraries, and cooperating systems that all move at different paces. The compiler system, the runtime library, the toolkits, and target cooperators over the web all keep upgrading without consulting your project schedule. Delivering value with less control over these essential variations means dealing with rapid change more effectively and relying less on up-front planning.

I also think that the various branches of software development feel these effects to differing degrees. Traditionally, embedded products were more insular, but the web, Wi-Fi, and other effects are dragging network effects into many of those formerly closed domains. The same thing happened to Data Processing / MIS / IT, first with private networks and now with the web. Holding requirements stable seems quantitatively more difficult now than it was 10-20 years ago.

BobLee 2003.08.17

Discussion removed to WaterfallFolklore

I consider waterfall to be "do the requirements, do the design, do the implementation, do the delivery." I consider non-waterfall to be "do some requirements, do some design, do some implementation, do some delivery" and then repeat as many times as appropriate.

The entire philosophy behind RUP is that these are not the terms of an either-or choice, but extremes along a spectrum. The generalized model, continuous rather than discrete, is that for each project we choose, as a function of time, how much design we are currently doing, how much requirements exploration, how much implementation, and how much testing. That's the famous "hump diagram".
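One way to picture that continuous mapping - the numbers below are invented for illustration, and only the shape matters - is a table of effort shares per discipline for each slice of the project, which is all the hump diagram really encodes:

    # quarter: (requirements, design, implementation, testing) shares of effort
    EFFORT_BY_QUARTER = {
        1: (0.50, 0.30, 0.15, 0.05),
        2: (0.25, 0.40, 0.25, 0.10),
        3: (0.10, 0.20, 0.50, 0.20),
        4: (0.05, 0.10, 0.35, 0.50),
    }

    for quarter, shares in EFFORT_BY_QUARTER.items():
        assert abs(sum(shares) - 1.0) < 1e-9   # every period spreads 100% of effort across all four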

That strikes me as sound as far as it goes. Let's say we exclude from the "waterfall" denomination any project that goes back to add a missing requirement, or to correct an erroneous one, during design or implementation - and similarly any project that goes back to fix the design during the implementation phase. If that's our definition, then we've defined a process that has never (or very rarely) been applied on a significant scale, and that borders on the silly.

Even taking the RUP generalization into account, there's a paradox here.

Jerry's model is one I can take seriously. If I know the problem domain, then I can divide solving a problem into step A, then step B, then step C. Cooking feels like that; although I've taken it up recently enough that there's a lot of discovery going on, there's always a certain order in which things are supposed to be done. If I follow that, and get the timing right, and the proportions, I end up with an edible meal.

There's no generalized model for cooking, though. Sometimes it's "gather ingredients, mix, bake, serve". Other times you do these in a different order, omit one, or do one twice or more. There are many other types of steps, too. It's more like Jerry's notion of waterfall than like Dwayne's.

Dwayne's is more like the waterfall I had in mind, with the exception of "testing whenever appropriate". What bugs me about that "standard" waterfall, the one that goes requirements - design - implementation, is where did these phases come from? Why are they the "standard" way of doing things?

It strikes me - that's the paradox - that the waterfall phases are a wonderful way to learn about a problem. As a generalized problem solving framework they work beautifully. First we define what the problem is, expose our assumptions about it, formulate concrete and achievable goals, etc. Right there in the "requirements" phase we learn a lot about the problem - if we do it right, sometimes we no longer have a problem by the time that phase is over. Then we move into "design", which is where we have the opportunity to think about what The System is that we are proposing to modify, explore possible unintended consequences of our proposed actions to solve the problem, and so on. Again an opportunity to re(de)fine our understanding of the situation. And then, in "implementation" we get to test our models against reality; that's where we learn whether the system responds to our interventions in the expected ways.

If we truly know the problem domain, chances are that there is a specific sequence of steps that apply. If you're designing a compiler, you can famously farm out the work to as many subteams as your compiler is expected to have passes. Only if we're tackling an unknown problem does the waterfall framework make sense, and then as a strategy for learning.
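To make the compiler example concrete - this is only a toy sketch, with placeholder passes invented for illustration - the defining property is that the sequence of passes is fixed and known up front, so each pass can be specified separately and handed to its own subteam:

    def lex(source):
        return source.split()                 # placeholder tokenizer

    def parse(tokens):
        return {"tokens": tokens}             # placeholder syntax tree

    def optimize(tree):
        return tree                           # placeholder optimizer (no-op)

    def emit(tree):
        return " ".join(tree["tokens"])       # placeholder code generator

    PASSES = [lex, parse, optimize, emit]     # fixed, sequential, known in advance

    def compile_source(source):
        result = source
        for do_pass in PASSES:                # no looping back to an earlier pass
            result = do_pass(result)
        return result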

The generic techniques defined by the "classic" waterfall phases work best for discovery, but the waterfall process is said to work best in projects where there's little discovery to be done. That strikes me as perhaps a bit silly. Confusing, at any rate.

LaurentBossavit 2003.08.17


If we look at a fine enough grain, every finite process will be composed of a series of waterfall efforts. We break things down (either in advance or on the fly) into steps that are small enough that we know we can do them A, B, C... with an arbitrarily small chance of running into the events that Bob Lee so aptly describes.

OTOH, if we look at a large enough grain, there are NO waterfall processes. We could, for example, look at the development of all the information processing systems in an organization, and we'll see all sorts of loops and variations. Even compiler development, as mundane as Laurent knows it to be, is not waterfall if we consider the development of compilers, say, since the first FORTRAN compiler in 1956 as one project.

IOW, all processes are built out of waterfall processes - sequences with no unanticipated branches - but some are built entirely of them, some are built on-the-fly, some top-down and some bottom-up. That's why it's important for everyone to understand waterfall processes (their uses and their limitations) before they embark on designing something less basic. - JerryWeinberg 2003.08.17


I think for most people who've been around for a while, waterfall is our instinctive choice, more out of comfort and apparent simplicity than for any other reason.

Having said that, as my focus these days is on managing offshore development, it's difficult to see how you could maintain the cost model and efficiency for offshore development without using the waterfall model.

Of course, if we were better at measuring the benefits, then the cost model would pale into insignificance.

PhilStubbington 2003.08.18


Jerry brings out a good point. We all (or at least I) design to the point that "the sequence is obvious in our heads". Enumerating the design further is waste. In the larger loop I'm using iteration, while at the detail level I'm "Waterfalling" (???) in cascades. Experience allows us to ratchet up the size of the cascades we can handle comfortably. Seeing patterns helps reduce the detail clutter.

The growth of your personal library of stored patterns and their side-effects determines how much you can sequence on auto-pilot. I would be interested in JR's manager view of pattern chunking and how that mixes with the development team's experience levels.

BobLee 2003.08.18


This week's copy of IT Week has just landed on my desk. I quote Paul Pickup, IT strategy consultant at technology clearing house Trading Technology:

"[Outsourcing] works where there is a waterfall model for the development process, where one step follows on from the next" he said. "But these days, the development of new applications is more likely to be concerned with integrating a collection of different systems on a network. There the developer has to be close to the users"

Apart (in my Mr Picky mode) from the fact that I'm not sure there is any process where one step doesn't follow on from the next (or the previous, surely? Unless you're a salmon swimming upstream), I'm not convinced that geographic closeness is a must-have. It's a nice-to-have, for sure, but not essential.

PhilStubbington 2003.08.18



Bob, Jim, you've got a lot of it right, but I was there, and there's a lot more to the story. The big emphasis of "waterfall" was that you had to have something visible (and, we emphasized, reviewable) at the end of each phase. This was a result of things growing bigger, not a result of things being small. Around the late 1960s, invisible programming (where the programmer went away and came back some time later with your product) wasn't working - for exactly the same reason it still doesn't work. At some point it becomes impossible to know the future or even understand the present.

So, Royce and many others were attempting to convert an invisible process into a visible one. That's the big lesson, and it's still totally valid today. We still have developers who want to say "trust me," take your money, and not be bothered until they give you (or don't) what they think you (should) want. Any process that doesn't have visible checkpoints (reviewable checkpoints) is going to be an unreliable process.

How far apart should these checkpoints be - that varies with the context of the customer's problem and the development organization's capabilities. There is no one right answer, as Jim points out.

How far ahead should you project the sequence, again, that depends on the same factors.

In short, no visible method with meaningful checkpoints is "silly." Calling such an approach "silly" is silly, because it doesn't take context into account. A process can be a better or worse fit for a certain context, and most organizations have at least enough different contexts to make four or five distinct approaches necessary if they want to do a good job. Of course, then they have to actually think about what approach they're going to use before they start - and that's apparently too strenuous for some people. - JerryWeinberg 2003.08.20


I am noticing two papers related to the contents of this thread. One is a pretty informed discussion of methodologies. The comments from one contribution to another tie out and tie together nicely.

The other paper is about having several methodologies in one organization. I have gotten good results recently proposing a "stack" of methodologies, one for content, another for software, and a third for platform / architecture. The content / software / platform categories work pretty well for web-ish systems, and the "stack" word helps because it's familiar to them. I don't have the distinctions and implications completely worked out yet, but it's actually in draft and out for a couple of sympathetic reviews. As usual, Jerry's formulated the fundamental model that goes with the example - there are a bunch of methodologies in any large organization.

Of course, if picking among several processes is strenuous, tuning your process to the problem at hand is going to be impossible. I wrote in IEEE Computer, in 1999 I think, that the good-as-in-consistently-successful project managers tune their processes to the problem at hand, especially when they're doing something unusual.

So if we know this, why can't we seem to tune our processes, especially in the face of feedback that what we're doing could work better? What's the sticking point? (Make that TheStickingPoint.)

- JimBullock, 2003.08.20 (It's not a methodology. Just do things exactly my way, damnit.)


In card days, there were people who exercised fine configuration control, and there were people who always made a mess for themselves and others.

In tape times, there were people who exercised fine configuration control, and there were people who always made a mess for themselves and others.

In disk days, there are people who exercise fine configuration control, and there are people who always make a mess for themselves and others.

Two questions:

  1. What's the common factor in good configuration control?

  2. What's the common factor in poor configuration control?

  3. (extra credit) Do you think the next new storage technology is going to guarantee good configuration control?

  4. (extra extra credit) What do you think the AYE Conference has to do with configuration control?

- JerryWeinberg 2003.08.21


For the extra extra credit question, I wrote something on that once and put it in safe storage on a CD. Let's see, that CD is here somewhere...

I think it was in the movie "The Russians are Coming, the Russians are Coming," that one of the characters kept saying, "We've just got to get organized!" Then there is an old saying, "A place for everything and everything in its place."

About 20 years ago I realized that I was not smart enough to remember everything, so I started using a set of advanced tools to store information where I could find it when I needed it. Maybe I should share those advanced tools here, or maybe just wait until my session on tools to share that.

DwaynePhillips 22 August 2003

A side note on "truck factor." My father died on the side of a road after he was hit by a truck. I don't become angry at people who say things like "truck factor" or "bus terminated project." It is hard, however, to continue listening to someone when they say those things. I am much better at doing that now than in January of 1990. But for speakers and writers, I urge caution with those things. Many people have been through such tragedies and have yet to work through the grief.


Thanks for the heads-up, Dwayne. I had a client once whose principal systems programmer was actually hit by a truck, and I also once saw a person killed by a truck. Even though neither of these people was known to me personally (I later got to know the systems programmer, who survived, barely), I find the image disturbing to this day. (The man was killed right in front of me more than 50 years ago, and I still have bad dreams about it.)

For a while I used "pregnancy factor," but that was politically incorrect and didn't apply anyway to part of the population. I've since switched to "religious conversion factor." I've actually had two clients who lost a key person to religious conversions. One woman became a nun; one man went off to a Buddhist monastery in Tibet and was never heard from again. - JerryWeinberg 2003.08.22

P.S. I also keep a list of where all my important stuff is, but I forgot where I put the list.


In my experiences with classic, one-pass Waterfall, there's almost always been a multi-pass Spiral going on under cover of darkness. Sometimes management knows; sometimes they don't. Sometimes it's explained away as model building for the sake of requirements elicitation, testing the analysis, probing the design, etc.

With Waterfall, the longer you go between phases, the more you have to invest in generating artifacts that record the result of each phase, and the greater the chance that holes, mistakes, or outright nonsense won't get smoked out soon enough to avoid rework of the type you're not officially allowed to do. With long phases, the probability that the customer will change their mind increases, and the expense of dealing with those changes increases.

I prefer Spiral, or one of the Agile methodologies, where you stay engaged with your customer, and show them small increments at frequent intervals, giving them the chance to sort out their own requirements and desires without causing mass upheaval.

I've also found that morale is higher when people feel that their efforts are going to demonstrable forward progress, rather than being sucked dry by ritual.

DaveSmith 2003.08.22


I agree with Dave. I believe this all comes back to what I term the difficulty of the product for the people involved. If the product is difficult, the customers change their minds often, and so on, then frequent interaction or short cycles are important. If people know what they want and have experience with the product, longer intervals are efficient.

Many people I meet at conferences work on projects where short iterations are necessary. Many people I meet in my line of work have projects where long phases are efficient. Two different sets of products and people - two different methods work best.

DwaynePhillips 23 August 2003


Discussion refactored to WaterfallFolklore (got too long to edit.)


Saw this on the Scrum mailing list... Mike Cohn writes, "The only Gantt chart that makes sense looks like this:
Analysis |*************************************|

Design |*************************************|

Coding |*************************************|

Testing |*************************************|

(That's supposed to be 4 bars all extending the full length of the project.)"

and a quote from Martin Fowler:

"I often say that the only problem with waterfalls is when they are too large. It's reasonable to kayak the Rogue River, it isn't wise to Kayak over Niagara Falls."

KeithRay 2003.09.01


See http://www.waterfall2006.com/ for evidence the pendulum is swinging back. -- GeorgeDinwiddie 2006.01.27

(( It's a subtle thing, but notice how the browser's "back" button is disabled when you click on links at that site. ;-) KeithRay - oops, I guess that got changed when the Dogmatic Programmers panel was added. ))


Disabling the back button makes a site a waterfall site, which brings up another problem with waterfall-type projects (and this may be the biggest problem of all). Think of how you feel when you're trying to browse a site that tries to control the way you can navigate it, so that you have no real options. Well, multiply that by a thousand days and you'll understand why people start to get truly unhappy on large waterfall projects.

It's not so much that the project is going to fail, but that they don't get to feel that they have anything to do with whether it succeeds or fails. - JerryWeinberg 2006.01.30

