• blindman (unregistered) in reply to nobody
    nobody:
    blindman:
    nobody:
    blindman:
    "No one could have predicted that the software would have changed in the manner that it did." Bull$h1t. Scalability should be anticipated in every application. Only a complete noob would assume they fully understand the clients requirements, or the even the client fully understands their requirements. Duh.
    Uh, scalability is an architectural quality describing how efficiently, and to what limit, a system can take advantage of additional resources (computational, storage, network, etc.) to handle greater or lesser workloads.
    Uh, as a database designer, I can tell you that scalability absolutely includes the ability to expand upon functionality without having to scrap the current design. If your definition does not include this, that speaks volumes about the quality of the applications you develop.
    What? It is not my definition - it is the accepted INDUSTRY definition. What you describe is generally known as flexibility, extensibility or maintainability - all important architectural qualities as well. In fact, these are at times MORE IMPORTANT than extensive scalability.

    Perhaps you should become more familiar with the definition of industry terms. http://en.wikipedia.org/wiki/Scalability http://www.pcmag.com/encyclopedia_term/0,2542,t=scalable&i=50835,00.asp http://datageekgal.blogspot.com/2007/04/10g-performance-and-tuning-guide-first.html (the latter thrown in for you database guys)

    I suppose one can think of scaling of the problem space (the ability to handle additional requirements) but that is an unusual usage of the term. Not wrong I suppose, just unusual. The problem with the unusual usage is that it is likely to create confusion among others as they will (most likely) be thinking about workload handling and not requirements handling.

    But perhaps you work in a community where that definition is common. I am not familiar with any such community.

    Then you should really get out more. Database designers frequently speak of making a database scalable to multiple sites, multiple clients, multiple products, multiple forms, etc... When a client comes to me and says they have only one location, I have the foresight to ask about the possibility that at some point in the future they may have more than one location, and design the application to handle that. If you do not, then your limited definition of scalability is doing your clients a disservice.
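
    To make that concrete, here is a minimal sketch (the tables and names are invented for illustration, not from any real client): even while the client has a single location, the design carries a locations table from day one, so the second site later is a row insert rather than a schema rewrite.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            -- A locations table exists even while there is only one location.
            CREATE TABLE location (
                location_id INTEGER PRIMARY KEY,
                name        TEXT NOT NULL
            );

            -- Orders reference a location instead of assuming a single site.
            CREATE TABLE customer_order (
                order_id    INTEGER PRIMARY KEY,
                location_id INTEGER NOT NULL REFERENCES location(location_id),
                placed_on   TEXT NOT NULL
            );
        """)

        # Day one: the single site the client asked for.
        conn.execute("INSERT INTO location (name) VALUES ('Headquarters')")

        # Two years later: the second site is a row, not a redesign.
        conn.execute("INSERT INTO location (name) VALUES ('West Branch')")

    The extra join costs nothing measurable; retrofitting location-awareness into a schema that hardcoded one site costs a rewrite.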

  • BigDigDug (unregistered) in reply to wgc
    wgc:
    I used to work in a 35-story building that was literally feet away from the old elevated highway: maybe room for a sidewalk between them. How do you enlarge that road? How do you deal with all the cross-roads and connections? One of the biggest problems with the old road was too many old ramps with no room for merge lanes or exit lanes: where can you put those? Another issue with the old road was bends that were too sharp, with buildings in the way: how do you straighten those? Building vertically gives more room for extra lanes, ramps, and cross-streets. Building underground gives room for those plus straightening the road without moving buildings. Which would have been more expensive: building underground or moving buildings?
    Planning ahead in the first place, before the buildings were allowed to be built? There's no conceivable way that it should be cheaper to completely replace a highway with a tunnel than to expand a highway, unless the city planners were incompetent at a level that is truly worse than failure.

    Not to mention that there's always eminent domain, which would probably have been cheaper than tunneling. Tear down the buildings that should never have been allowed to be created in the first place and do it RIGHT this time.

    Besides, what's going to happen when you need to expand the tunnel? Or is Boston's population shrinking?

    Boston has created yet another scenario where it's impossible to expand, failing to learn the "scalability" requirement that caused the entire mess in the first place.

  • nobody (unregistered) in reply to blindman
    blindman:
    Then you should really get out more. Database designers frequently speak of making a database scalable to multiple sites, multiple clients, multiple products, multiple forms, etc... When a client comes to me and says they have only one location, I have the foresight to ask about the possibility that at some point in the future they may have more than one location, and design the application to handle that. If you do not, then your limited definition of scalability is doing your clients a disservice.
    I can imagine multiple locations having an impact on scalability requirements (industry standard definition), response time requirements (added network delays) and a host of possible functional requirements. Duh.

    Anyone who does not thoroughly elicit and analyze requirements (functional and architectural) and likely change cases will tend to end up designed into a corner. Sure, you can refactor your way out of some corners, but architectural limitations typically don't lend themselves to that. You can architect a system to flex in ways that allow you to cover reasonable changes or "found" requirements - in fact, doing otherwise is a ticket to failure.

    I find it amusing that you feel the need to imply that I don't know what I am doing. I do.

  • Some Atheist (unregistered)

    Enough with the sermons.

    Even if you define "WTF" as Worse Than Failure, this wasn't even a WTF, it was just a simple F. Since you can't seem to distinguish between a WTF and an F, I'll do you a favor and give you a free U, as in:

    F. U.

    CAPTCHA: ewww (my thoughts exactly)

  • (cs) in reply to blindman
    blindman:
    Then you should really get out more. Database designers frequently speak of making a database scalable to multiple sites, multiple clients, multiple products, multiple forms, etc... When a client comes to me and says they have only one location, I have the foresight to ask about the possibility that at some point in the future they may have more than one location, and design the application to handle that. If you do not, then your limited definition of scalability is doing your clients a disservice.
    Only if the sole noun that you recognise is "scalability."

    Multiple sites: scalability. Multiple clients: scalability. Multiple products: I don't know what the heck this means, but in all probability extensibility, not scalability. Multiple forms: extensibility. "etc": unknowable at this time.

    I suppose the last three would (in a very minor way) fall partially under "scalability" if you have to juggle around with extents and the like, but I thought that Oracle did all that for you, these days.

    There's enough namespace pollution in computer terminology already, without needlessly adding to it and obfuscating a perfectly useful and well-understood term.

    BigDigDug:
    Planning ahead in the first place, before the buildings were allowed to be built? There's no conceivable way that it should be cheaper to completely replace a highway with a tunnel than to expand a highway, unless the city planners were incompetent at a level that is truly worse than failure.

    Not to mention that there's always eminent domain, which would probably have been cheaper than tunneling. Tear down the buildings that should never have been allowed to be created in the first place and do it RIGHT this time.

    Besides, what's going to happen when you need to expand the tunnel? Or is Boston's population shrinking?

    Boston has created yet another scenario where it's impossible to expand, failing to learn the "scalability" requirement that caused the entire mess in the first place.

    I think the obvious solution would have been to hold off building Boston until around 1950, when the little local difficulty with motorised traffic started to become apparent.

    Damn those eighteenth century town planners ...

    The exercise of eminent domain in the case of Faneuil Hall and the like might prove a mite contentious outside the rather narrow viewpoint of the Al Qaeda school of civic rectitude.

  • nobody (unregistered) in reply to Some Atheist
    Some Atheist:
    Enough with the sermons.

    Even if you define "WTF" as Worse Than Failure, this wasn't even a WTF, it was just a simple F. Since you can't seem to distinguish between a WTF and an F, I'll do you a favor and give you a free U, as in:

    F. U.

    Quite the intellect, aren't we?
  • (cs)
    Alex Papadimoulis:
    "Avoid premature generalization," Haack advises. "Don't build the system to predict every change. Make it resilient to change."

    As for knowing when to generalize, Haack lives by the rule of three: "The first time you notice something that might repeat, don't generalize it. The second time the situation occurs, develop in a similar fashion -- possibly even copy/paste -- but don't generalize yet. On the third time, look to generalize the approach."

    In the case of Haack's Web application, this approach would have motivated the team to begin generalizing the working code with each new iteration. Over time, the changes might have prevented "ancient" parts of the code from becoming an unmanageable burden.

    ...

    Business analysts and software developers must be trained to understand that the software they build can and will change in ways they'll never be able to predict. Therefore, tools such as refactoring and design patterns should be an integral part of the development process.

    Testing and quality assurance also need to be resilient to change. When the software changes, so should the unit tests, smoke tests and regression tests. Without adequate testing early and throughout, defects from the constant changes will pile on and become even more difficult to resolve.

    Isn't this just what agile methods are all about, making it easier to adapt to change? A few months ago Alex was bashing agile (http://worsethanfailure.com/Articles/The-Great-Pyramid-of-Agile.aspx), but has he now changed his opinion?

    As for myself, I've found TDD and related methods good for producing high-quality code. Some of the other agile practices I'm slightly doubtful about, especially the way requirements are gathered from the users as user stories and written straight away as code. Not because they would not work, but because I know better methods (which also happen to be iterative and test-driven) for designing systems which do what the user needs (not what the user says he would like to have) and which produce user interfaces with high utility, efficiency and learnability.
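
    To make the test-driven part concrete, a toy sketch (the shipping rule is invented for the example): when a requirement changes, the test is updated in the same change, so the suite never drifts away from the behaviour it guards.

        import unittest

        def shipping_cost(total):
            # Changed requirement: orders of 50 or more now ship free.
            return 0.0 if total >= 50 else 4.95

        class ShippingCostTest(unittest.TestCase):
            def test_small_order_pays_shipping(self):
                self.assertEqual(shipping_cost(20), 4.95)

            def test_large_order_ships_free(self):
                # Updated in the same commit that altered the rule.
                self.assertEqual(shipping_cost(50), 0.0)

        if __name__ == "__main__":
            unittest.main()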

    Addendum (2007-10-11 17:03): PS: I think it would be good to have a new category for "serious articles" (such as this and The Mythical Business Layer and the older ones), so that it would be easy to find them afterwards from amongst all the "traditional WTF articles".

  • (cs)

    We all know that building designs are not changed lightly: to change a building after the foundations are laid is to invite expense and risk failure. We all know software designs are changed lightly, causing expense and failure. Time to stop pretending that large codebases can change: if a customer provides a new set of requirements, that equals a new product. New code, from scratch.

    This way, the customer gets the real price, up front. Frequent changes after the project has started then have transparent costs. Changed requirements are still possible, but making them requires a new contract. A new contract means a new delivery date and a new price. The old code is thrown away.

    It's the sort of thing which would need serious laws to enforce. We need the respect and responsibilities of architects. (insert random real profession with professional liability and enforced accreditation).

    However: we currently operate like the less respectable carnival sideshow operators. And some of us have to take orders from clowns.

    Upside of scary grown-up system: we could deliver working products. And feel good about ourselves again.

  • Nano (unregistered)

    The problem: The team decided to keep it simple: no database, no special configuration and no extensibility.

    The solution: Perhaps add a database, special configurations and extensibility? No! Instead, use Haack's Rule of Three: just hack in kludges, preferably with straight copy/paste, until the architecture starts groaning under the weight, then undo the last few months' work with a complete rewrite just before it collapses. Can't beat the Microsoft Way.

  • aubin (unregistered) in reply to Chrordata

    Amen to that! My friends and I have been advocating architect/engineer-like licensing for software professionals for years now - and we're only ~25 - because we know we'd be able to get our licenses, and the people who turn this industry into a joke (in college we had professors who wrote worse code than we did) would have to find something else to do. Just please, please, if this ever happens, make it required for the "PHB" to have his license, too...

    captcha: sanitarium - see, even the captcha bot wants us to clean up the software landscape!

  • James (unregistered)

    Reading all of this is pretty funny to me. This project failed because of massive supplies of idiocy.

    Now, the notion that people without formal training cannot do projects like this ... oh, that got me laughing. The largest idiots I've known in my 30+ years of development have had college degrees.

    This project and most others that fail miserably like it fail primarily because these companies fail to do one important thing: hire and foster high-quality talent.

    Another huge issue is the failure to break such massive projects down into manageable subsystems. It's now in vogue, but I've done it for 20 years. It works great.

    Scalability: always plan on it. There's always some joker. Just plan on it.

    And ... please, have some of these college-educated moronic managers read The Mythical Man-Month. You can't just toss more consultants at a project and get it done on time. In fact, that often has the reverse effect.

    A while back I was on a project that got bloated and horrible. Management kept adding people. Finally, management said that they were going to trash the project after a couple years of development.

    I went to the head boss and told him that if we kept the core talented folks, canned the rest, and started over, we could be up and running within 60 days. He took the risk, cut the team from 20 down to 4, and we kicked serious code ass.

    Documentation: we did architectural documentation and used automatic systems (such as those now found in VS.NET) to manage the documentation of functions, classes and modules. Much of the code was self-documenting, as we were brutal with each other about naming ... if something wasn't clear, we made the responsible developer change it.
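
    As a rough sketch of what that brutality about naming looked like in practice (the function and data names here are invented for the example, not from that project):

        # Before review: the author gets asked in every code review what this does.
        def proc(d, f):
            return [x for x in d if x["total"] > f]

        # After the name was challenged: the intent is carried by the names,
        # and the docstring gets picked up by automatic documentation tools.
        def orders_above_threshold(orders, minimum_total):
            """Return the orders whose total exceeds minimum_total."""
            return [order for order in orders if order["total"] > minimum_total]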

    So, let's put the blame for these large failures where it belongs: with the idiots who shouldn't have been hired for this work in the first place.

  • dotnetgeek (unregistered)

    Is there a larger image for "The code to ruin"? I want to print a poster and put it in my cube.

  • sweavo (unregistered) in reply to JohnFromTroy
    JohnFromTroy:
    Excellent essay.

    The rule-o-three thing is worthy of coder's scripture.

    I concur. I've come close to blowing some of my deadlines by generalising too early, spending time on a wonderful infrastructure that is, at present, scarcely used.
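
    For anyone who hasn't watched the rule of three play out, a toy sketch (the export routines are invented for the example):

        # First occurrence: just write it.
        def export_orders_csv(orders):
            lines = ["id,total"]
            lines += [f"{o['id']},{o['total']}" for o in orders]
            return "\n".join(lines)

        # Second occurrence: a near copy/paste is still acceptable.
        def export_customers_csv(customers):
            lines = ["id,name"]
            lines += [f"{c['id']},{c['name']}" for c in customers]
            return "\n".join(lines)

        # Third occurrence: NOW generalise, because the pattern has proven itself.
        def export_csv(rows, columns):
            lines = [",".join(columns)]
            lines += [",".join(str(row[col]) for col in columns) for row in rows]
            return "\n".join(lines)

    Had I waited for the third occurrence, my wonderful infrastructure would have been shaped by real uses instead of guessed ones.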

  • fa (unregistered) in reply to Sin Tax
    Sin Tax:
    Expensive failed projects - that's not a real WTF.

    No, the real WTF is a project like the one I'm attached to. A public, underfunded customer who can't really afford the solution necessary or desired. Eternal fighting over price and deliveries, eternal discussion about whether to upgrade the obsolete and out-of-support J2EE/Portal product it is built on, a production environment that was put together by half-clueless developers (who were particularly clueless about building a production environment for high availability and maintainability), which is now so fragile that the appservers need to be kicked every night. Management trying to fix problems by throwing more hardware or more people at them.

    The REAL WTF? This system is intended to be a core element of a national strategy. It is the poster case for how to do such a thing right. It has won countless awards, nationally and internationally. It actually manages to be marginally useful as well, yet everyone technical who has been clued a bit about its internals will agree that it's rotten to the core.

    Captcha: Paint. Yeah, people don't care about quality. They care about shiny colors.

    Sin Tax

  • Dave (unregistered) in reply to Richard Sargent
    Richard Sargent:
    As for knowing when to generalize, Haack lives by the rule of three: "The first time you notice something that might repeat, don't generalize it. The second time the situation occurs, develop in a similar fashion -- possibly even copy/paste -- but don't generalize yet. On the third time, look to generalize the approach."

    P. J. Plauger published an article many years ago. I think it was in Software Development magazine, sometime in 1989. The article was about what he called the "0, 1, infinity" rule.

    In essence, once the pattern goes multiple, it is time to design accordingly. Switch from the singular pattern into a multiple pattern and be ready for all future enhancements of the same nature. They will come. Build it. :-)

    That's a very different version of the 0, 1, Infinity rule than I've seen before.

    The version I'm familiar with says that in a system, for any particular thing, you should allow either no instances (that is, it is prohibited), exactly one instance (that is, an exception to the prohibition), or infinitely many instances (or, at least, as many as system resources allow).

    The point of this is to avoid placing purely arbitrary limits on things. You wouldn't want, for instance, a mail client that placed an arbitrary limit on the length of folder names, the number of folders, or the depth of folder nesting, because it would be frustrating under some circumstances. Thus, you should avoid writing software that does that kind of thing.
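
    A throwaway sketch of the difference, sticking with the folder example (the data layout is invented for illustration):

        # Violates 0, 1, infinity: an arbitrary cap on nesting depth.
        MAX_DEPTH = 8  # why 8? nobody remembers, and somebody will hit it

        def add_subfolder_capped(folder, name):
            if folder["depth"] >= MAX_DEPTH:
                raise ValueError("folders may only nest %d deep" % MAX_DEPTH)
            return add_subfolder(folder, name)

        # Follows the rule: zero, one, or as many as system resources allow.
        def add_subfolder(folder, name):
            child = {"name": name, "depth": folder["depth"] + 1, "children": []}
            folder["children"].append(child)
            return child

        root = {"name": "inbox", "depth": 0, "children": []}
        work = add_subfolder(root, "work")
        projects = add_subfolder(work, "projects")  # nests as deep as memory permits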

  • Alex (unregistered) in reply to Scott

    Scott:
    I don't understand what you are trying to say. When an agilist says that the docs will become outdated, this means with respect to how the application actually works (the code). How can the code be outdated with respect to the code?

    This has already been answered, but I'd like to add my 2 cents.

    Actually, it's pretty easy for code to get outdated. Of course it's not outdated with respect to the code itself, but it easily becomes outdated with respect to the reasons for the code being there. Seen that a lot.

    E.g., code circumvents some obscure bug in library X, but is 10x slower than it should be. The obscure bug gets fixed upstream, but the workaround never gets removed or switched back to the original code. You need to document things like that! Filing a bug in your own tracker could be one way of doing it. I would even go as far as filing that bug and putting a comment in the code that points to it, and maybe even linking it to the bug report you created upstream. Then again, I've seen programmers who can't even read a comment two lines above the line of code they are editing, and who add a comment asking about the reasons for doing something that was explained just two lines above ... go figure.
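
    A sketch of the kind of breadcrumb I mean (the library, the bug numbers and the slowdown are all hypothetical): the workaround carries pointers to both the upstream issue and our own tracker entry, so whoever finds it later knows exactly when it can be deleted.

        def parse_records(raw):
            # WORKAROUND: libxyz <= 2.3 corrupts records containing embedded
            # NULs (upstream issue libxyz#1742); stripping them first is ~10x
            # slower than parsing directly. Tracked in our tracker as BUG-5311.
            # Delete this once we require libxyz >= 2.4.
            cleaned = raw.replace(b"\x00", b"")
            return cleaned.split(b"\n")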
