Blog: Why the Waterfall "Failed"?

Maybe it's not the waterfall that failed, but our understanding of it, as Dr. Kruchten pointed out (see http://www.ibm.com/developerworks/rational/library/4626.html). However, the waterfall clearly has its limitations. I tend to think that we tried to use it beyond its boundaries of applicability and now blame the process, when we really need to look in the mirror. Let's consider the topic a bit more closely.

There are a great many tasks that are based on rigid, stable algorithms. It is possible - and natural - to come up with "correct" upfront requirements and design for these tasks. This means that it is possible to apply the serialized software development process that we used to call "waterfall". That statement has at least two corollaries. Firstly, linear, serialized development methods are not dead; what's more, they should be used for this subset of tasks. Let's call them deterministic tasks. Secondly, if we cannot "do it right" on the first attempt for this "right" type of tasks, that says something about our ability to do our job and nothing else.
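
To make the distinction concrete, here is a minimal sketch (in Python, purely illustrative) of a deterministic task: validating a card number with the Luhn checksum. The specification is fixed and published (ISO/IEC 7812-1), so "correct" upfront requirements and design are genuinely achievable.

    def luhn_valid(number: str) -> bool:
        """Check a digit string against the Luhn checksum."""
        digits = [int(d) for d in reversed(number)]
        # Double every second digit from the right; subtract 9 if it exceeds 9.
        total = sum(d if i % 2 == 0 else (d * 2 - 9 if d * 2 > 9 else d * 2)
                    for i, d in enumerate(digits))
        return total % 10 == 0

    print(luhn_valid("79927398713"))  # True - the classic Luhn test number

The requirements for this function will be the same tomorrow, next year, and in a decade; nothing about the problem forces iteration.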

Most business applications belong to a different subset of tasks, however. Although every program is ultimately based on an algorithm, these business tasks cannot be reduced to stable, described-once-and-forever algorithms. We can call these tasks partially-deterministic.
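
By contrast, here is an equally minimal, hypothetical sketch of a partially-deterministic business task. The code is still an algorithm, but the policy it encodes (the tiers, rates and thresholds below are all invented for illustration) is exactly the part the business will change next quarter.

    def order_discount(order_total: float, tier: str) -> float:
        """Return the discount under today's (hypothetical) pricing policy."""
        # These numbers are business policy, not mathematics:
        # marketing can, and will, revise them at any time.
        rates = {"gold": 0.10, "silver": 0.05, "standard": 0.0}
        rate = rates.get(tier, 0.0)
        if order_total > 1000:  # volume bonus - also policy, not algorithm
            rate += 0.02
        return round(order_total * rate, 2)

    print(order_discount(1200.0, "gold"))  # 144.0 under today's rules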

I have been involved in developing business applications since the mid-70s. Never once during my entire career has a client (a user, customer, stakeholder, etc.) known what he wants. Even less likely was it that he could distinguish between "wants" and "needs", let alone envision what he will need tomorrow.

Why is this? Are they all stupid? Or maybe they are just evil and like to torture us, poor software developers? I tend to say that the massive scale of this phenomenon calls for less extravagant reasons. We may even suggest that we are dealing with the nature of the beast. Here are the two "less extravagant" reasons that I like most. The first one is more or less obvious: clients deal with a very volatile reality, one that changes on them every day. Indeed, market conditions, human behaviours, the acts of competitors and governments, and social processes cannot be reduced to stable algorithms.

I think the second reason is less obvious. Here it is: not only does the volatile reality influence the requirements and hence our systems, but the other way around is true too. As in O. Henry's famous novel, the systems we develop influence the reality! This is a very important point that we do not always remember. The extent of our influence on reality is yet to be calculated. Future scientists may come up with an indicator similar to Planck's constant: a minimal system change beyond which we influence the very reality we are trying to automate.

So, certain tasks call for a certain process. If the above two observations are correct, they are a nail in the coffin of the linear process's applicability to partially-deterministic tasks. By the same token, however, using an iterative process for a highly algorithmic, well-defined task may not be the best approach either.
