• (cs)

    Sad. A typical example of a US Government program.

    God bless America {it needs all the help it can get}

  • Anon (unregistered) in reply to ParkinT
    ParkinT:
    Sad. A typical example of a US Government program.
    Sadly, it isn't: most US government programs get put into production EVEN THOUGH they work as well as VCF did.
  • (cs)

    The executive summary: RTFM!!

  • greywar (unregistered)

    It's not just the US govt, it's governments in general. A government doesn't feel constrained by profitability, so they're more than willing to add extra features no matter the cost.

    Then add in that there's little accountability... and presto! Instant disaster.

  • Tim Smith (unregistered)

    LOL, it isn't just the government.

    It sort of reminds me of the old joke that there is always an asshole in every group. Look around; if you don't see one, you are the asshole.

    Every sector of business has produced bad applications. If you can't find an example around you, it is probably your application that is the piece of trash destined to suck the lifeblood out of every programmer that touches it.

  • (cs)

    Those failures are not specific to the U.S. I know of several similarly costly failures in Europe, too.

    Anyway "nearly a million lines of code" doesn't seem very much to me. I wonder how they managed to spend 200M for that quantity of code.

  • Grumpy (unregistered)

    When did I subscribe to a 2-page essay? Too many words ...

  • Brandon (unregistered) in reply to Grumpy
    Grumpy:
    When did I subscribe to a 2-page essay? Too many words ...
    Get over it... that's the problem: "managers" like you get the job and are too lazy to educate themselves further on software management. This article isn't exactly new material, but it stresses the importance of having formal training and knowing when to make these things called "decisions" which, amazingly, have an impact on business.
  • gabba (unregistered)

    I'm confused as to why "REWRITE LARGE SWATHS OF CODE" is considered an essential step on the way to software success. Rewriting code is often necessary, but why is it put on the same level as fixing bugs and testing?

  • (cs) in reply to ammoQ
    ammoQ:
    Those failures are not specific to the U.S. I know of several similarly costly failures in Europe, too.

    Anyway "nearly a million lines of code" doesn't seem very much to me. I wonder how they managed to spend 200M for that quantity of code.

    Duh! Applying scope creep to their requirements!

  • Robert S. Robbins (unregistered)

    I think documenting an application and its code is more important to maintenance than trying to generalize and abstract functionality. Developers just like to complicate things to the point where nobody can understand how the system works.

    There are only a few changes you should plan for, like a change in the database design. I would just document where you need to update code and queries to accommodate a new column rather than attempt to make the application independent of its database design.

    I am currently working on an application that loads the entire database schema into jagged arrays so you have no idea what column or table you may be dealing with in a query.

  • (cs)

    Good essay. I'll share it with my team.

    Management: Developers were both poorly managed and micromanaged.
    Micromanagement is a subset of poor management. Been there, experienced that.
  • (cs)

    Hey isn't that picture a screenshot from Final Fantasy VII?

  • Michael (unregistered) in reply to gabba
    gabba:
    I'm confused as to why "REWRITE LARGE SWATHS OF CODE" is considered an essential step on the way to software success. Rewriting code is often necessary, but why is it put on the same level as fixing bugs and testing?
    The first version is almost always a throwaway; you should expect it and accept it. It will be significantly better the second time around. Once you have built the code once, you have learned 90% of what was wrong with the design. A large rewrite at that point will pay off exponentially down the road.
  • (cs)

    So Alex is running out of submissions, eh? Nice diagram, though...

  • JohnFromTroy (unregistered)

    Excellent essay.

    The rule-o-three thing is worthy of coder's scripture.

  • Anonymous (unregistered)

    Bull and Shit.

    VCF was written by a private company, for a public customer.

    This does not "prove" in any way, any dubious axiom that "private industry always does it better than the government". The problem with VCF had more to do with a lack of adequate oversight. The people writing the checks were not held responsible for the results. So they didn't make sure that the people writing the software were held responsible for the results. Sad state of affairs, but the fix to this problem is: get rid of corrupt politicians. (yes; both parties - yes, it's the campaign finance system, STILL).

  • (cs) in reply to Robert S. Robbins
    Robert S. Robbins:
    I think documenting an application and its code is more important to maintenance than trying to generalize and abstract functionality. Developers just like to complicate things to the point where nobody can understand how the system works.
    No amount of documentation can fix an unmaintainable system. A really good system needs minimal documentation. Too much documentation means more of it will get outdated sooner, and not be fixed, and more of it will be simply wrong.

    On the other hand, it's certainly possible to make a system so general and with so many layers of abstraction that it becomes unmaintainable as well. To be maintainable, a system or API needs first and foremost to be simple. Then you don't need much documentation. Abstractions that remove complexity are good (unless you need the more complex features and cannot bypass the abstraction). Abstractions that add complexity are bad.
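
    To make that distinction concrete, here is a minimal Java sketch (the class, table, and method names are invented for illustration): the first class removes complexity by hiding connection handling, parameter binding, and cleanup behind a single call, while the interface after it merely adds another layer of vocabulary without hiding any of the work.

        // Assumed example: an abstraction that REMOVES complexity.
        // Callers get one method and never touch connections or cleanup.
        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import javax.sql.DataSource;

        class CaseLookup {
            private final DataSource dataSource;

            CaseLookup(DataSource dataSource) {
                this.dataSource = dataSource;
            }

            // One call hides connection management, parameter binding, and cleanup.
            String findCaseTitle(long caseId) throws SQLException {
                String sql = "SELECT title FROM cases WHERE id = ?";
                try (Connection conn = dataSource.getConnection();
                     PreparedStatement stmt = conn.prepareStatement(sql)) {
                    stmt.setLong(1, caseId);
                    try (ResultSet rs = stmt.executeQuery()) {
                        return rs.next() ? rs.getString("title") : null;
                    }
                }
            }
        }

        // Assumed example: an abstraction that ADDS complexity.
        // A new layer and new vocabulary, yet callers still do all the same work.
        interface GenericEntityRetrievalStrategy<T> {
            T retrieve(Object[] compositeKeyParts) throws Exception;
        }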

  • Richard Sargent (unregistered)
    As for knowing when to generalize, Haack lives by the rule of three: "The first time you notice something that might repeat, don't generalize it. The second time the situation occurs, develop in a similar fashion -- possibly even copy/paste -- but don't generalize yet. On the third time, look to generalize the approach."

    P. J. Plauger published an article many years ago. I think it was in Software Development magazine, sometime in 1989. The article was about what he called the "0, 1, infinity" rule.

    In essence, once the pattern goes multiple, it is time to design accordingly. Switch from the singular pattern into a multiple pattern and be ready for all future enhancements of the same nature. They will come. Build it. :-)
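
    A tiny Java sketch of the rule of three in practice (the tax calculation and names are made up for illustration): tolerate the duplication twice, then extract the shared rule the third time it shows up.

        // Hypothetical example: applying the rule of three.
        class Totals {
            // First and second occurrences: near-duplicates, left alone on purpose.
            static double invoiceTotal(double amount)  { return amount + amount * 0.07; }
            static double estimateTotal(double amount) { return amount + amount * 0.07; }

            // A third caller appears, so the shared rule gets pulled out...
            static double withSalesTax(double amount, double taxRate) {
                return amount + amount * taxRate;
            }

            // ...and the new caller (and future ones) use the generalized form.
            static double quoteTotal(double amount) { return withSalesTax(amount, 0.07); }
        }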

  • nickinuse (unregistered)

    Makes me wonder who could possibly write the system for Counter Terrorist Unit in 24 Hours ;)

    CAPTCHA: smile, yeah I did.

  • different anon (unregistered) in reply to Anonymous
    Anonymous:
    Bull and Shit.

    VCF was written by a private company, for a public customer.

    This does not "prove" in any way, any dubious axiom that "private industry always does it better than the government".

    Did you even read the essay?

    That axiom has nothing to do with the issue at hand. The essay was about terrible management and its results, not on public vs. private industry.

  • (cs)

    My favorite was the Receive Requests for New Feature --> Bugger off! --> Visit from Upper Management --> Project Outsourced to India

  • ICanHaveProperGrammar (unregistered)

    It's definitely not just government. I work for MegaCorp (tm), which had a project that was designed to replace an old system and some bits of other systems.

    Project was to take a year, and cost roughly $1 million a day.

    It followed exactly the same pattern described here, and it was delivered last week: two years late, slower than the old system, with crippling bugs still evident, and it doesn't go any way toward replacing anything other than the single system we wanted to replace.

    I'm just glad I got pushed off the project after bursting into laughter when I saw the first delivery of code. Even my old boss, who stuck it through to the end, said that was the point at which it should have been taken round the back and shot.

  • (cs) in reply to gabba
    gabba:
    I'm confused as to why "REWRITE LARGE SWATHS OF CODE" is considered an essential step on the way to software success.
    How about "DITCH FEATURES?" Apparently, that's required too, even after pushing back the schedule.
  • (cs) in reply to brazzy
    brazzy:
    Robert S. Robbins:
    I think documenting an application and its code is more important to maintenance than trying to generalize and abstract functionality. Developers just like to complicate things to the point where nobody can understand how the system works.
    No amount of documentation can fix an unmaintainable system. A really good system needs minimal documentation. Too much documentation means more of it will get outdated sooner, and not be fixed, and more of it will be simply wrong.

    On the other hand, it's certainly possible to make a system so general and with so many layers of abstraction that it becomes unmaintainable as well. To be maintainable, a system or API needs first and foremost to be simple. Then you don't need much documentation. Abstractions that remove complexity are good (unless you need the more complex features and cannot bypass the abstraction). Abstractions that add complexity are bad.

    Ah, generalisations ...

    Allow me to throw in my own.

    There is no correlation between the chewy goodness of a system and the amount of documentation required. Thus, I think, your (unintentionally) weasel words, "minimal" and "too much."

    There is, however, a distinct correlation between the literacy skills of 95% of programmers and the amount of documentation, let alone decent and useful documentation, produced. This correlation leads to the under-production of documentation.

    There is also a distinct correlation between the "jobsworth" tendencies of the average, process-obsessed PHB and the amount of (entirely useless) documentation produced. This correlation leads to the over-production of documentation.

    Most projects, in documentation terms, are a heady mix of the two tendencies. In other words, you get lots of documentation to choose from, but none of it is useful. One project in my past had an entire server to hold the documentation -- almost all of which consisted of minutes from meetings over the previous three years. On the other hand, a description of the database schema was nowhere to be found.

    I'm continually amused by this agiley claim that "documentation gets outdated and [won't] be fixed and will [largely] be wrong." Ho ho, eh? You'd never say this about code, of course. Oh no. You can never have too much code.

    And it never gets outdated and won't be fixed and won't [largely] be wrong.

  • nobody (unregistered)

    To throw fuel on the private versus public sector fire, consider the Fannie Mae CORE project. This is a private company, helped by a private "world class" partner, spending hundreds of millions of dollars of stockholders' money to produce a system that could not be made to work. Bad management, bad requirements, bad architecture, bad engineering and bad testing.

    It had every one of the issues discussed in the Avoiding Development Disasters article and invented a few of its own ways to fail.

    captcha: "craaazy" - how appropriate.

  • Mark (unregistered)

    I think something that is often overlooked in the cycle of failure that plagues large organizations is culture.

    A few years back I worked as a consultant for a global corp. I was on a team of about fifteen analysts helping them with an enterprise-wide order provisioning system. We were tasked with helping to define and manage the requirements and then to coordinate integration testing and UAT.

    From the outset there were problems, because there was massive distrust of anything that the internal IT department was involved with. IT had experienced a long run of WTFs, releasing product after product that never made it past beta (and the affected user base continued to use paper-based 'systems'). In the case of the project I was working on, the centralized database for the existing 'system' was a room full of physical files.

    So anyway, here was a user community in desperate need of some help. Even an adequate IT solution would have been an upgrade over manually walking folders around the office to fulfill orders. But the users wanted no part of the project because of IT's reputation. And that became a self-fulfilling prophecy, because some SMEs more or less refused to take part in requirements definition. Their attitude was 'build it and then I'll have a look at it'.

    So because it is so 'easy' to fail on a large project, I think it's common for IT groups in large organizations to develop bad reputations with their internal clients. And from there it becomes difficult to recover.

    Just my 2 cents

    Oh, and to tell you what happened with the project: the software did eventually launch into production, and the teams that had provided SMEs to help define the requirements were happy with the product, while the teams that hadn't wanted to scrap it. Last I heard, it was being pulled from production after about two years.

  • O Thieu Choi (unregistered)

    "In most other industries, equating completion and success would be ludicrous. After all, few in the construction industry would deem a two-year-old house with a leaky roof and a shifting foundation anything but a disaster -- and an outright liability."

    You don't know some of the builders I do, and I congratulate you on that.

  • nobody (unregistered) in reply to Anonymous
    Anonymous:
    Sad state of affairs, but the fix to this problem is: get rid of corrupt politicians. (yes; both parties - yes, it's the campaign finance system, STILL).
    Sorry, but politicians are not the problem here. It is bad executive management, bad project management and bad software engineering. Politicians bring in their own evils, as does the general state of development practices within the DOJ (at the time and now), but this is clearly a common issue. Large projects often fail because organizations don't practice the necessary techniques, nor do they have sufficiently skilled management and staff to succeed.

    But you are right, holding those writing the checks accountable is part of the solution. As is holding the architects and the head project managers accountable. In too many cases no one ends up accountable for massive failures. No pain, no improvement.

    captcha: "tacos" - Lunch time!

  • nobody (unregistered) in reply to O Thieu Choi
    O Thieu Choi:
    "In most other industries, equating completion and success would be ludicrous. After all, few in the construction industry would deem a two-year-old house with a leaky roof and a shifting foundation anything but a disaster -- and an outright liability."

    You don't know some of the builders I do, and I congratulate you on that.

    I too used to give construction industry counter-examples to poor software engineering. Having gone through the construction process for a few homes now, I no longer make that mistake. It's not that they are worse than software efforts; it's that they aren't much better.

    I hope construction of larger buildings, bridges, dams, etc. is done far better. But I know there are examples showing that this is not always so.

  • (cs) in reply to brazzy
    brazzy:
    Robert S. Robbins:
    I think documenting an application and its code is more important to maintenance than trying to generalize and abstract functionality. Developers just like to complicate things to the point where nobody can understand how the system works.
    No amount of documentation can fix an unmaintainable system. A really good system needs minimal documentation. Too much documentation means more of it will get outdated sooner, and not be fixed, and more of it will be simply wrong.
    amen
    brazzy:
    On the other hand, it's certainly possible to make a system so general and with so many layers of abstraction that it becomes unmaintainable as well.
    I agree wholeheartedly
    brazzy:
    To be maintainable, a system or API needs first and foremost to be simple. Then you don't need much documentation. Abstractions that remove complexity are good (unless you need the more complex features and cannot bypass the abstraction). Abstractions that add complexity are bad.
    well put!

    I think one big problem with the documentation debate among developers is that different developers take the general term "documentation" to mean different things.

    One form of documentation should describe a system, its subsystems, and how it connects and interacts with other systems.

    Another form of documentation should describe the hack that had to be put in place to make the language/OS/device/etc. work.

    The code itself should be a form of documentation by describing what it is doing through variable and method names.

    A different form of documentation can describe the different levels of abstraction to help map new developers to the important components much more quickly.

    These are all forms of "documentation", but aren't the same thing. There are other forms of documentation, but this post is already too long. Not all programs need each type of documentation.

    The types and detail of documentation needed in an application are as specific to the application as its design. Maintaining the documentation is just as important as maintaining the code.
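
    As a small illustration of the "code as documentation" point above (both versions are invented for the example), the second method needs no separate write-up because its names carry the intent:

        // Hypothetical before/after: names doing the documenting.
        class RetentionPolicy {
            // Opaque: what are d, x, and 30 supposed to mean?
            static boolean chk(int d, boolean x) {
                return d > 30 && !x;
            }

            // Self-documenting: the signature states the rule being applied.
            private static final int RETENTION_PERIOD_DAYS = 30;

            static boolean isEligibleForArchival(int daysSinceLastUpdate, boolean hasOpenTasks) {
                return daysSinceLastUpdate > RETENTION_PERIOD_DAYS && !hasOpenTasks;
            }
        }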

  • foppy (unregistered)

    Wow, training and experience are needed to run a successful project. Who'd a thunk it? The problem is twofold. First, everyone thinks they're an expert in whatever job you assign them. Second, 90% of projects are filled with people who are solely in it to grab as much money as they can. The more consultants you add, the quicker the money is bled, with little regret over a failing project. Once the money is gone, they'll just move on to the next chump.

  • (cs)

    Computers only do exactly what someone tells them to (be they user, programmer, or hardware engineer). Saying "it's not the technology or the tools, it's the people" is kind of redundant. Though, being redundant doesn't make it wrong.

  • Mark (unregistered) in reply to foppy
    foppy:
    Wow, training and experience are needed to run a successful project. Who'd a thunk it? The problem is twofold. First, everyone thinks they're an expert in whatever job you assign them. Second, 90% of projects are filled with people who are solely in it to grab as much money as they can. The more consultants you add, the quicker the money is bled, with little regret over a failing project. Once the money is gone, they'll just move on to the next chump.

    In large part, I agree with your characterization of consultants, because most consultants come from the school of 'information hiding'. They rarely go into it with their client's best interest at heart. Their real goal is billable hours, and therefore they can actually become barriers to success.

    When I was a consultant, I occasionally interfaced with folks from Andersen Consulting (or Arthur Andersen or whatever they were called) and they were just the worst at this. I would characterize them as a bunch of c*** blockers who did everything they could to stretch projects out.

  • (cs)
    Maintainable software begins at the highest level -- the Enterprise Architecture

    Oh, please. Most of the worst stories on this site have the word "Enterprise" in the title. Designing every application to be enterprisey right from the start is a recipe for disaster. Too much planning is just as bad as none at all. Planning too far ahead ends up smack against that cold hard thingie called "reality". The trick is to reconsider the scope of the project at every stage, and have the guts to react appropriately. A complete refactoring at around 10,000 LOC is manageable. At 100,000, I'm not so sure anymore. At one million... yeah, I'm kidding.

    In other words do plan, and do what's appropriate for the project. Just don't rush to do the most impressive thing you can think of.

  • Scott (unregistered) in reply to real_aardvark
    real_aardvark:
    I'm continually amused by this agiley claim that "documentation gets outdated and [won't] be fixed and will [largely] be wrong." Ho ho, eh? You'd never say this about code, of course. Oh no. You can never have too much code.

    And it never gets outdated and won't be fixed and won't [largely] be wrong.

    I don't understand what you are trying to say. When an agilist says that the docs will become outdated, this means with respect to how the application actually works (the code). How can the code be outdated with respect to the code?

  • (cs) in reply to JohnFromTroy
    JohnFromTroy:
    Excellent essay.

    The rule-o-three thing is worthy of coder's scripture.

    http://c2.com/cgi/wiki?RuleOfThree

  • Badger (unregistered) in reply to Michael
    Michael:
    gabba:
    I'm confused as to why "REWRITE LARGE SWATHS OF CODE" is considered an essential step on the way to software success. Rewriting code is often necessary, but why is it put on the same level as fixing bugs and testing?
    The first version is almost always a throwaway; you should expect it and accept it. It will be significantly better the second time around. Once you have built the code once, you have learned 90% of what was wrong with the design. A large rewrite at that point will pay off exponentially down the road.
    I'm currently 3.5 years into a large software project, and we're nearing the end of the "REWRITE LARGE SWATHS OF CODE" part. Our path followed this diagram. I feel warm and fuzzy; we're nearly there.
  • SRM (unregistered)

    Interestingly, neither the article nor any of the comments mentions software project risk management[*]. Either risk management was non-existent, or the management decided to ignore all the classical warning signs of an impending project failure.

    [*] Well, kind of obvious given this is the WTF site...

  • Ollie Jones (unregistered)

    InfoWorld has a pretty good writeup (3-21-2005) of this disastrous VCF software project, for which the prime contractor apparently was SAIC.

    http://www.infoworld.com/article/05/03/21/12FEfbi_1.html

    Before laughing our faces off at these guys, I think we should remember that the heinous events of Sept 11, 2001 and the ensuing FBI shakeup came right in the middle of this contract.

    I lament that the Sept 11 attacks were so successful. Maybe this $200M is one of the losses caused, at least partly, by the attacks.

  • Franz Kafka (unregistered) in reply to gabba
    gabba:
    I'm confused as to why "REWRITE LARGE SWATHS OF CODE" is considered an essential step on the way to software success. Rewriting code is often necessary, but why is it put on the same level as fixing bugs and testing?

    Because this is a flowchart for an awful, dysfunctional system.

  • ARandomGuy (unregistered) in reply to JohnFromTroy
    JohnFromTroy:
    Excellent essay.

    The rule-o-three thing is worthy of coder's scripture.

    And it actually does show up in the GoF patterns book... (A fact that seems to be lost on all sorts of people who mistake design for jamming every pattern in the book into the code.)

    CAPTCHA: muhahaha

  • Franz Kafka (unregistered) in reply to Ollie Jones
    Ollie Jones:
    InfoWorld has a pretty good writeup (3-21-2005) of this disastrous VCF software project, for which the prime contractor apparently was SAIC.

    http://www.infoworld.com/article/05/03/21/12FEfbi_1.html

    Before laughing our faces off at these guys, I think we should remember that the heinous events of Sept 11, 2001 and the ensuing FBI shakeup came right in the middle of this contract.

    I lament that the Sept 11 attacks were so successful. Maybe this $200M is one of the losses caused, at least partly, by the attacks.

    Then we can proceed laughing uproariously at them. 9/11 isn't an excuse to screw the pooch! 9/11 happened, now get on with your life. If this were a real war, with an enemy that actually threatened our security, and you used a successful attack as an excuse to screw around for two years, you'd be thrown in a box.

    Seriously, WTF are you thinking?

  • Richard (unregistered) in reply to Tim Smith
    Tim Smith:
    LOL, it isn't just the government.

    It isn't just commercial software, either. Lots of FOSS suffers from the same Big Ball of Mud designs.

  • (cs) in reply to Mark
    Mark:
    When I was a consultant, I occasionally interfaced with folks from Andersen Consulting (or Arthur Andersen or whatever they were called) and they were just the worst at this. I would characterize them as a bunch of c*** blockers who did everything they could to stretch projects out.
    Arthur Andersen was a financial auditing and accounting company that disbanded after the Enron disaster, and Andersen Consulting was its IT/Management consulting spinoff that was renamed to Accenture in 2001.

    And yes, they have a very bad reputation for doing anything to increase their share of a project - but pretty much all big IT consulting companies have the same reputation, and well-earned. It's the smaller companies and the freelancers that can supply quality and solve problems rather than prolonging them - if you find the right ones. They won't bloat a project in order to put in 20 more of their people, because they don't have 20 more people on short notice. Of course that's not a good thing when you actually need 20 more people, which is why the big companies get hired.

  • XNeo (unregistered)

    Well, VCF failed and got scrapped by the government. That's where our German government is fundamentally different: here, huge failures in software development (A2LL, for example) just go into production anyway...

  • AuMatar (unregistered) in reply to Robert S. Robbins
    Robert S. Robbins:
    I am currently working on an application that loads the entire database schema into jagged arrays so you have no idea what column or table you may be dealing with in a query.

    I await reading your story here. Of course, since you seem to think this is a good idea, it will be posted by your coworkers.

  • will (unregistered)

    I knew someone who worked on this, and the story from the developers is not the same as presented here.

    This project was originally one of these huge SEI CMM projects. You deliver the software and it is tested against the list of all requirements; you make sure you get under the number of faults allowed, fix those, get the final version accepted, and hope you win the contract for the next release.

    Development was going well: they were replacing the old system and adding a bunch of new capability. Then 9/11 happened and the FBI decided to change everything around.

    So SAIC, the company developing this software, got the FBI to switch to a more agile-ish approach where they would deliver some basic features, get those tested, start getting them deployed, and continue. They did that, came out with the first version, and the FBI went and tested it as if it were the final version.

  • benny (unregistered)

    Yay, a factual retelling of a well-known, bloated, mismanaged gov't project. Hardly a WTF. What's next? Therac-25?

  • LBL (unregistered)

    Repeat it with me: Agile... Software... Development...
