• different anon (unregistered) in reply to Franz Kafka
    Franz Kafka:
    Then we can proceed laughing uproariously at them. 9/11 isn't an excuse to screw the pooch! 9/11 happened, now get on with your life. If this were a real war, with an enemy that actually threatened our security, and you used a successful attack as an excuse to screw around for two years, you'd be thrown in a box.

    Seriously, WTF are you thinking?

    Next time RTFA.

    FTA: "Only a few months after the ink was dry on these contracts, the Sept. 11 tragedy struck, reshaping the mission of the FBI. No longer would the Bureau be concerned merely with law enforcement. Instead, to protect against terrorism on U.S. soil, the FBI needed to get into the intelligence business.

    This shift turned the requirements for UAC inside out. Instead of beautifying old mainframe apps, the charter changed to replacing those applications with a new, collaborative environment for gathering, sharing, and analyzing evidence and intelligence data."

    Also FTA: At the February 2005 hearing, Mueller said that the FBI delivered "finalized" requirements for VCF in June 2002, which included integrating the functionality of the five original ACS applications with the new system. But according to Hughes, the changes kept coming at a rate of more than one per day.

    Also, from Wikipedia, FBI Directors during the development of VCF (since 2000):

    Louis J. Freeh, 1993–2001
    Thomas J. Pickard, 2001 (acting director)
    Robert S. Mueller III, 2001–present

    You're right, the new requirements wanted by a bureaucracy in turmoil guided by multiple directors have no connection to a project's failure! That's not a set of moving targets! </sarcasm>

  • ssanchez (unregistered)

    "Most expensive failed project", you have got to be kidding, it doesn't even come close. Have a look at the UK's National Health Service current IT project:

    http://search.bbc.co.uk/cgi-bin/search/results.pl?scope=all&edition=d&q=NHS+IT+system&go=Search

    To quote:

    The system is set to cost £6.8bn extra over 10 years.

    In addition, when the training and local implementation is taken into account, the figure rises to over £12bn.

    That's right, 6.8 BILLION - wait for it - POUNDS STERLING. That's a cool USD 13.6bn, more than it cost MS to produce Vista, including the marketing. And no-one in the UK seems in the slightest bit bothered that it's destined for failure.

    CAPTCHA (speaking of failed IT ventures): atari

  • BigDigDug (unregistered) in reply to nobody
    nobody:
    I too used to give construction-industry counter-examples to poor software engineering. Having gone through the construction process of a few homes now, I no longer make that mistake. It's not that they are worse than software efforts; it's that they aren't much better.

    I hope construction of larger buildings, bridges, dams, etc. is done far better. But I know there are examples that show this is not always so.

    I know one great example: Boston's Big Dig.

    Some highlights include the builders using deficient concrete and 2-ton concrete tiles that were GLUED to the roof of tunnels.

    Just like many IT projects, ultimately, it was an expensive, pointless project to replace a working system with a new, flashy system that would solve all the problems of the old system because it was NEW.

    (It replaced a working highway with a new tunnel, because the raised highway was "ugly".)

    However, unlike most IT failures, this one did kill someone when one of the glued tiles became unglued.

  • Beau "Porpus" Wilkinson (unregistered)

    Very large software projects seem to be doomed to failure. This has been shown statistically, using case studies, using anecdotes like this one, etc., ad nauseam.

    In light of that fact, one might reasonably ask why we don't just look for opportunities to address needs using smaller systems. In the case of VCF, I suspect that the users in the field could have suggested small improvements or minimal new systems that would make their jobs easier in specific ways.

    The problem is that executives have to think big, and develop big "visions," to be recognized by our business culture. So we get "enterprise" apps that are supposed to revolutionize the way the users work... and they always seem to involve BizTalk, and/or Citrix, and/or bloated Ajax libraries, and whatever else looks good on paper to executives (who often can't even check e-mail).

    It's really depressing... it results in absurd instructions like "don't worry about fixing that invoice generator, we need you on the LARS project."

    (Yeah, Mr. Cook, I'm talking about you)

  • Corporate Cog (unregistered)

    It's a sad fact that both the current system I maintain and the last one at a former employer are EOL. The latter is being rewritten from scratch (mostly by the folks who brought us the VCF). When the announcement came that it was to be rewritten, most of my coworkers were kind of shocked. Yes, it barely worked (like my current system), but at what cost? Just what Alex described would have happened had the VCF system gone into production. Now my bad luck is following me, and the subsystem I work on is going to be outsourced. And again, most of my coworkers are surprised.

    It seems most of the people I work with just expect software systems to be a big ball of mud and that there is no choice but to thrash against said ball indefinitely.

  • Fedaykin (unregistered)

    Good article, but I disagree with the proposed lifetime of 15 years for an enterprise application. Perhaps there are some environments where that makes sense, but there are certainly some where it simply does not.

    Think about it. If any large corporation were still using a system developed in ~1992, it would most likely be a DOS-based terminal application running on a proprietary network solution. While it would hopefully be a solid application, it would utterly fail to deliver the functionality necessary for the business using it to exist in the modern world.

    I think ~5 years is a more realistic target for enterprise apps. The apps I develop I generally consider viable for only about 3 years, but that's partly because the customers I develop for have needs that change on a yearly and sometimes quarterly basis. No app can be designed to handle that much change for more than a few years and not collapse under its own weight (unless a great deal of effort (read: money) is spent developing it).

  • Sin Tax (unregistered)

    Expensive failed projects - that's not a real WTF.

    No, the real WTF is a project like the one I'm attached to. A public, underfunded customer who can't really afford the solution necessary or desired. Eternal fighting over price and deliveries; eternal discussion about whether to upgrade the obsolete, out-of-support J2EE/Portal product it is built on; a production environment put together by half-clueless developers (particularly clueless about building for high availability and maintainability) that is now so fragile the appservers need to be kicked every night. Management trying to fix problems by throwing more hardware or more people at them.

    The REAL WTF? This system is intended to be a core element of a national strategy. It is the poster case for how to do such a thing right. It has won countless awards, nationally and internationally. It actually manages to be marginally useful as well, yet everyone technical who has been clued a bit about its internals will agree that it's rotten to the core.

    Captcha: Paint. Yeah, people don't care about quality. They care about shiny color.

    Sin Tax

  • Richard Sargent (unregistered) in reply to Fedaykin
    Fedaykin:
    Think about it. If any large corporation were still using a system developed in ~1992, it would most likely be a DOS-based terminal application running on a proprietary network solution. While it would hopefully be a solid application, it would utterly fail to deliver the functionality necessary for the business using it to exist in the modern world.

    bzzzt.

    Go back to school. One part of the system I work with was developed 35-40 years ago. No one knows, and no one is willing to bet on, how many more years it will be around.

    Years ago I corresponded with a gentleman from England who injured himself laughing at the puppy who thought all systems should be replaced every five years. (Well, he didn't really hurt himself.) He figured portions of his system would still be running when the 2038 time problem surfaces.

  • Mark (unregistered)

    As I see it, life-span is a business strategy consideration. IT is a service and so it has to adapt to the overall business strategy.

    I worked for a company that was very aggressive in terms of growth and entering new markets. Once the trigger was pulled, requirements gathering, development, testing, and rollout would proceed at breakneck speed. When I first started there it was a huge culture shock, because I would see all these failure points being built into the system - but the business strategy was to enter the market as quickly as possible and deal with the cleanup afterward. Even if the cleanup eventually meant retooling the entire system.

    The thinking from executive row was twofold.

    First, there's a pile of money there waiting to be made so don't let IT considerations slow you down - ever.

    Second, it's possible that we won't even want to stay in this market, so let's get a serviceable system in place, test the water and if we want to stay, we'll start fixing stuff.

    That was a real eye-opener for me because I really started to understand that in some situations, what we traditionally think of as solid SDLC practices can run counter to business strategy.

    I would never want to go back to working for that company because it was overly chaotic. But they are very successful with offices in four or five countries now. So it's hard to fault their approach.

  • (cs)

    VCF wasn't "worse than failure." It was just a plain failure.

  • ICanHaveProperGrammar (unregistered) in reply to Fedaykin
    Fedaykin:
    If any large corporation were still using a system developed in ~1992, it would most likely be a DOS-based terminal application running on a proprietary network solution. While it would hopefully be a solid application, it would utterly fail to deliver the functionality necessary for the business using it to exist in the modern world.

    You have to be kidding! The MegaCorp(tm) I work for has 3 core systems that work together to handle 60% of its turnover (around a trillion US dollars annually). Of these:

    1. System 1 is 25 years old
    2. System 2 is 20 years old
    3. System 3 is an attempt to replace some of the user interfaces into 1 & 2. It's a (badly designed) GUI instead of a (well-designed) greenscreen, and it's scheduled to be replaced before the other two are, simply because it slowed the rate of data entry by 90%.

    I'm not saying it's easy wedging weekly changes into these apps, but if you really think that your banks, insurers, big oil, telecoms, power, water, gas, or government organisations aren't utterly reliant on "Enterprise" applications over a decade old, and failing miserably to find more recent apps that actually work better for them, then you are living in a fantasy world.

    Our latest "replacement and improvement" project delivered a system that is significantly slower and less functional than the 15 year old system that is being removed.

    Believe me, if you went round the top 100 companies (excluding big IT) in the world and blocked every Enterprise system over 10 years old, you'd be burning your neighbour's house for light and warmth, and fighting your pets for their food, inside a month.

  • Franz Kafka (unregistered) in reply to BigDigDug
    BigDigDug:
    I know one great example: Boston's Big Dig.

    Some highlights include the builders using deficient concrete and 2-ton concrete tiles that were GLUED to the roof of tunnels.

    Epoxy is not the same as Elmer's glue - just because it's glue doesn't make it bad. Hell, SpaceShipOne is composite and glue all over.

  • Franz Kafka (unregistered) in reply to different anon
    different anon:
    Franz Kafka:
    Then we can proceed laughing uproariously at them. 9/11 isn't an excuse to screw the pooch! 9/11 happened, now get on with your life. If this were a real war, with an enemy that actually threatened our security, and you used a successful attack as an excuse to screw around for two years, you'd be thrown in a box.

    Seriously, WTF are you thinking?

    Next time RTFA.

    That doesn't excuse the mess. Shit happens and you deal with it. You'd think the FBI would be able to do a better job than they did, but no. And really, 9/11 didn't reshape anything. All it did was underscore the need for better cooperation among agencies, subject to very good laws limiting that cooperation.

  • (cs) in reply to Scott
    Scott:
    real_aardvark:
    I'm continually amused by this agiley claim that "documentation gets outdated and [won't] be fixed and will [largely] be wrong." Ho ho, eh? You'd never say this about code, of course. Oh no. You can never have too much code.

    And it never gets outdated and won't be fixed and won't [largely] be wrong.

    I don't understand what you are trying to say. When an agilist says that the docs will become outdated, this means with respect to how the application actually works (the code). How can the code be outdated with respect to the code?

    OK, I'll be much more obvious.

    Programmers (agile or otherwise) are fond of emphasising the code at the expense of the documentation. I contend that this is because a goodly proportion of programmers are illiterates with no communication skills to speak of. (And a further large proportion do not write English as a first language, which further complicates things.)

    The result is that programmers, on the whole, hate writing and maintaining documentation. Therefore they do not do it. And, since we're all casuists of the first order, we need an evasive explanation of why we don't do things that we don't like. Thus, "documentation is unnecessary/gets out-of-date/was eaten by my dog."

    Niklaus Wirth believes that Algorithms + Data Structures = Programs. For all but the most trivial examples, I believe that Programs + Documentation = Usable System.

    All I was trying to say was that, if you can maintain the code, you can also maintain the documentation. The full semantics of a system are never contained solely in the code.

    How, for example, do you use the code to explain to future programmers why you've made a particular binary choice? Say you've chosen to use a hashmap rather than an RB-style map. Take one of these two possibilities:

    (1) Hashmap chosen for efficiency reasons. Yes, let's put that as a comment every time we use the hashmap. Far preferable to a single line in a design document.

    (2) Hashmap chosen because of an obscure bug in the RB map library. This should also be documented. What happens if a future release of the library fixes the bug?
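    To make the contrast concrete, here's a minimal Java sketch of both kinds of rationale as code comments; the class, the map, and the bug number are hypothetical illustrations, not anything from the article:

        import java.util.HashMap;
        import java.util.Map;

        public class SuspectIndex {
            // Choice (1): HashMap rather than TreeMap (an RB-tree map), because
            // lookups dominate and we never iterate in key order. A single line
            // in the design document covers every use site; repeating this
            // comment at each of them is pure noise.
            //
            // Choice (2) would read: "HashMap used to work around (hypothetical)
            // bug #1234 in the RB map library", and nothing in the code itself
            // tells a maintainer when that workaround can be retired.
            private final Map<String, Integer> hitsByName = new HashMap<>();

            public void record(String name) {
                hitsByName.merge(name, 1, Integer::sum);
            }
        }

    Either way, the rationale has to live somewhere a maintainer will actually find it; the code alone cannot say why.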

    Things that do not get implemented in the code cannot be "documented" in the code. There are rather a lot of these in the average large-scale project.

    Don't be lazy. Don't be pitiful. Document fully. Documentation is a first-class citizen in the programming world, just as coding, testing, and source control are: and for precisely the same reason -- maintenance.

  • (cs)

    "After all, few in the construction industry would deem a two-year-old house with a leaky roof and a shifting foundation anything but a disaster" That is not correct. In country(Estonia) where I come from, new houses get guaranty for 2 years. So because of that and because of uses of unqualified personal for building, new construction usually start leaking from roof much before the deadline( 2 years ). And builders consider that normal :D

  • c (unregistered)

    There is a line missing on the diagram, from Outsource to India. The cynic would say it should go straight to failure; otherwise it should go back to the start.

  • (cs) in reply to BigDigDug
    BigDigDug:
    I know one great example: Boston's Big Dig.

    Some highlights include the builders using deficient concrete and 2-ton concrete tiles that were GLUED to the roof of tunnels.

    Just like many IT projects, ultimately, it was an expensive pointless project to replace a working system with a new, flashy system that would solve all the problems of the old system because it was NEW.

    (It replaced a working highway with a new tunnel, because the raised highway was "ugly".)

    Bullshit. The "working highway" was badly constructed in terms of making the traffic flow quickly and thus suffered from constant congestion.

    Additionally, the project included the rerouting of the traffic to and from Boston's main airport through a tunnel that bypasses the downtown area.

    The project may have been badly planned and executed, but it was far from pointless.

  • (cs) in reply to gabba
    gabba:
    Rewriting code is often necessary, but why is it put on the same level as fixing bugs and testing?

    If that means refactoring, yes, it is just as important as fixing bugs and testing. Consider: you can always do more testing and fix more bugs. Therefore, if something really is less important than testing and fixing bugs, the only rational allocation of resources is not to do that thing at all; that is, use all the resources you would have allocated to it for testing and bug-fixing.

    It might be claimed that you cannot refactor the code because you need all those resources to fix important bugs. But if you are overwhelmed by important bugs, that suggests the design or specification of your code is faulty, and those are problems no amount of testing and bug-fixing will solve.

  • (cs) in reply to Mark
    Mark:
    I think something that is often overlooked in the cycle of failure that plagues large organizations is culture.

    That came home to me in a previous job. The company culture was to have a can-do attitude; the company was very proud of that attitude, and senior managers often spoke approvingly of it. And who can blame them - what a good culture to have, right? No jobsworths and time-servers about.

    But that asset had a downside: the company was incapable of dealing with negative information and bad news. Consequently, the specifications for projects were appalling, because nobody could stand up and say "we cannot produce this product because its specification is inadequate."

  • (cs) in reply to Michael
    Michael:
    gabba:
    I'm confused as to why "REWRITE LARGE SWATHS OF CODE" is considered an essential step on the way to software success. Rewriting code is often necessary, but why is it put on the same level as fixing bugs and testing?
    The first writing is almost always a throwaway; you should expect it and accept it, and it will be significantly better the second time around. Once you build the code once, you have learned 90% of what was wrong with the design. A large rewrite at that point will pay off exponentially down the road.

    Plan to throw one away; you will, anyhow -- F. Brooks

  • (cs)

    I love the photo you chose, Alex. It reminds me of "The Addams Family" TV series.

  • (cs) in reply to Grumpy
    Grumpy:
    When did I subscribe for a 2-page essay? Too many words ...

    So ask for a refund of your subscription price, moron.

  • Codeville (unregistered)

    I was just thinking yesterday about how this makes developers feel at different stages of the process:

    http://blog.codeville.net/2007/10/09/version-101/

  • different anon (unregistered) in reply to Franz Kafka
    Franz Kafka:
    That doesn't excuse the mess. Shit happens and you deal with it.

    "Deal with it" means dealing with the moving targets... how, exactly? The government didn't stop adding demands and the programmers couldn't say "go fuck yourselves." Or was there another bureaucracy willing to foot the bill?

    How would you have done it without losing your funding?

  • Jack (unregistered)

    Discussing failed software always reminds me of the Anna Karenina principle: "[Successful projects] are all alike; every [failed project] is [a failure] in its own way." (http://en.wikipedia.org/wiki/Anna_Karenina#Plot_summary)

    The diagram shows the happy path. I believe any of us could describe new failure paths from our experience, and that there would be no limit to, and no duplication among, such paths.

    That's not to say, of course, that there aren't generalizations: insufficient testing, ignoring/misinterpreting requirements, overwhelming time constraints, etc. But each of these has a different root cause in each failed project.

  • Mark (unregistered) in reply to Raedwald

    I know exactly what you are saying, because I've been in that exact situation as well (at a different company). Only I did stand up and call the specification crap (diplomatically, of course), and I was rewarded by being branded an obstacle-thrower.

    I feel I am adequately sensitive to over-design. But requirements that are nothing more than 'the system must work good-like' don't really give the developer anything to work from. So what happened was that there was a lot of back-channel communication, with the developer calling an SME and building stuff straight into the app from on-the-fly requirements.

    You can probably guess how that ended.

  • Fedaykin (unregistered) in reply to Richard Sargent

    You should learn to read. I specifically said there were circumstances where software with a long lifespan made sense.

    Don't be a jackass.

  • BigDigDug (unregistered) in reply to brazzy
    brazzy:
    Bullshit. The "working highway" was badly constructed in terms of making the traffic flow quickly and thus suffered from constant congestion.

    Additionally, the project included the rerouting of the traffic to and from Boston's main airport through a tunnel that bypasses the downtown area.

    The project may have been badly planned and executed, but it was far from pointless.

    Oh please, the whole tunnel thing was pointless. There are these two technologies that are simpler than tunnels and that have worked for thousands of years: plain old roads and bridges.

    Throwing the highway underground for aesthetic reasons is a perfect analogy to people using "enterprise" technologies when simpler technologies would work better and cheaper.

    And to whoever complained that the tiles weren't "Elmer's glued" but epoxied: 1. I never said "Elmer's glue", and 2. "glue" and "epoxy" are synonyms.

  • Joe (unregistered) in reply to BigDigDug
    BigDigDug:
    I know one great example: Boston's Big Dig.

    Some highlights include the builders using deficient concrete and 2-ton concrete tiles that were GLUED to the roof of tunnels.

    Just like many IT projects, ultimately, it was an expensive pointless project to replace a working system with a new, flashy system that would solve all the problems of the old system because it was NEW.

    (It replaced a working highway with a new tunnel, because the raised highway was "ugly".)

    However, unlike most IT failures, this one did kill someone when one of the glued tiles became unglued.

    I drive through that tunnel every day. Shortly after that tile fell on the poor woman and killed her, you could see people looking up at the ceiling as they drove through the tunnel. Apparently, rear-ending someone at 50 MPH is preferable to having a 2-ton concrete tile come unglued and fall on you. Pick your poison.

    Oh, and for those of you outside Massachusetts, don't feel like you're absolved of it. When the project began incurring billions of dollars of cost overruns, the Feds bailed us out. Yup. Farmers in Wichita are having their income taken from them to pay for our clusterf*ck.

    Sorry folks.

    Captcha: doom - it's like the captcha bot knows what we're talking about. Shhh!

  • Coward (unregistered)

    Hmmm....those failures sound a little bit like working at True.com...

  • Todd (unregistered)

    I did a case study on VCF a while back. While looking through the mounds of information on it, I found that the IEEE had a really good write-up on the details. You can read it at: http://www.spectrum.ieee.org/sep05/1455

  • different anon (unregistered) in reply to BigDigDug
    BigDigDug:
    brazzy:
    The project may have been badly planned and executed, but it was far from pointless.
    Oh please, the whole tunnel thing was pointless. There are these two technologies that are simpler than tunnels and that have worked for thousands of years: plain old roads and bridges.
    Was there the space to add another glut of bridges and roads downtown and to the airport? Part of Boston's insanity is the lack of available space above ground.
  • EricS (unregistered)

    http://en.wikipedia.org/wiki/The_Mythical_Man-Month

    What was true then is true now.

  • TC (unregistered)

    The software projects I have worked on that have been 'failures' are predominantly due to business/management practices and culture. Occasionally I've come across incompetent coding, but it is not the worst contributor to failure of the projects I've been involved in.

    Generally, businesses want minimal time and minimal cost spent on a project but don't realize (or care) about the cost to quality. The only people who are aware of software development complexity - and therefore understand the time/cost/quality relationship - are developers or former developers. Try to explain to a non-coder the unit tests, patterns, workarounds, bugs, and dependencies (to name but a few) that go into a screen that appears simple to the users. They just don't get it.

    On the technical side, I think the tools available to developers are still quite immature and lack the kind of integration that would help eliminate (through automation) common problems that drag down the software development process. For example, I'm convinced of the worth of unit testing; however, at present, the time it takes to write good unit tests explains why many developers avoid them, even with the benefits they bring further into the life cycle of a project.
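    As a rough illustration of the ceremony involved, here's a minimal JUnit 5 sketch; the invoice class is a hypothetical stand-in, defined inline so the example is self-contained:

        import static org.junit.jupiter.api.Assertions.assertEquals;

        import java.util.ArrayList;
        import java.util.List;
        import org.junit.jupiter.api.Test;

        class InvoiceTest {
            // Hypothetical class under test, inlined to keep the sketch runnable.
            static class Invoice {
                private final List<Double> lines = new ArrayList<>();
                void addLine(int quantity, double unitPrice) {
                    lines.add(quantity * unitPrice);
                }
                double total() {
                    return lines.stream().mapToDouble(Double::doubleValue).sum();
                }
            }

            // Even this trivial check needs a test class, a fixture, an assertion
            // with a floating-point tolerance, and build integration before it
            // starts paying back - the time cost described above.
            @Test
            void totalSumsLineItems() {
                Invoice invoice = new Invoice();
                invoice.addLine(2, 10.00);
                invoice.addLine(1, 5.00);
                assertEquals(25.00, invoice.total(), 0.001);
            }
        }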

  • blindman (unregistered)

    "No one could have predicted that the software would have changed in the manner that it did." Bull$h1t. Scalability should be anticipated in every application. Only a complete noob would assume they fully understand the clients requirements, or the even the client fully understands their requirements. Duh.

  • Mark (unregistered) in reply to TC
    TC:
    The software projects I have worked on that have been 'failures' are predominantly due to business/management practices and culture.

    Bingo.

    I once saw a BRD that had a bunch of really complex specifications, but one of the last 'requirements' listed was 'the system must be launched by July 1.' It was late May when the BRD was released. So I was like, 'July 1 of what year?'

  • Franz Kafka (unregistered) in reply to different anon
    different anon:
    Franz Kafka:
    That doesn't excuse the mess. Shit happens and you deal with it.

    "Deal with it" means dealing with the moving targets... how, exactly? The government didn't stop adding demands and the programmers couldn't say "go fuck yourselves." Or was there another bureaucracy willing to foot the bill?

    How would you have done it without losing your funding?

    That comment was directed at the FBI. I'd have taken the project managers who were harassing the contractors out to the woodshed, so to speak, and replaced them if they didn't stop screwing up.

  • nobody (unregistered) in reply to blindman
    blindman:
    "No one could have predicted that the software would have changed in the manner that it did." Bull$h1t. Scalability should be anticipated in every application. Only a complete noob would assume they fully understand the clients requirements, or the even the client fully understands their requirements. Duh.
    Uh, scalability is an architectural quality describing the efficiency and limit to which a system can take advantage of additional resources (computational, storage, network, etc.) to handle greater or lesser workloads. Few systems have arbitrary scalability, and with degrees of scalability come costs charged against other qualities (response time, throughput for a given set of resources, complexity, etc.), so trade-offs need to be made.

    Scalability does not mitigate the obvious fact that one rarely knows all of the requirements, nor the fact that requirements will (almost always) change as a system of any significant complexity is built. It has nothing to do with that. The process you need to employ is Requirements Management. It is a continuous process and is named appropriately - it is not simply Requirements Gathering or Requirements Documentation. Things will change - always.

    Oh, and as another noted, Risk Management is essential in managing any significant project. Sometimes this means knowing when to pull the plug because the circumstances no longer allow the project to succeed. If you start designing an aircraft carrier and the requirements change so that you now need a submarine, it's best to stop, regroup, and approach the new requirements as a new project without the baggage of what you've done so far. OK, in real life it is seldom so black-and-white, but it is in circumstances like those experienced by VCF that management, architects, and other leaders earn their keep.

  • joe (unregistered) in reply to ParkinT

    Here's SAIC's (the government contractor) side of the story:

    http://www.saic.com/cover-archive/law/trilogy.html

  • Synonymous Awkward (unregistered) in reply to Corporate Cog

    It's just been decided that a notoriously large and muddy application (it eats souls) where I work is to be replaced in its entirety. I've never heard a meeting room full of overworked programmers cheer at being given extra work before.

    As far as documentation goes, I always liked Lisp's docstring approach. Having the documentation as part of the code helps me concentrate that little extra on keeping it up-to-date; with a well-written program (not that I ever write those) you might even be able to use docstrings as part of your user-help system, which I suppose would be another incentive.
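    The same keep-the-docs-beside-the-code idea shows up in Javadoc; a minimal sketch, with a hypothetical class and method for illustration:

        /** Hypothetical utility for case-file reference numbers. */
        public final class CaseRef {
            private CaseRef() {}

            /**
             * Normalizes a case reference by trimming whitespace and upper-casing it.
             * Because this comment lives beside the code, the javadoc tool can
             * extract it into browsable help, much as a Lisp docstring can be
             * retrieved at runtime via (documentation 'fn 'function).
             *
             * @param raw the user-entered reference, e.g. " vcf-1234 "
             * @return the canonical form, e.g. "VCF-1234"
             */
            public static String normalize(String raw) {
                return raw.trim().toUpperCase();
            }
        }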

  • Paul B (unregistered)

    Can we get a high-resolution version of the "code to ruin" diagram? I'd like to print it out and stick it on the wall.

  • The Frinton Mafia (unregistered)

    Compared to the gigantic disasters that the UK's NHS is capable of, the US security services are actually fairly small and responsible organizations.

    http://burningourmoney.blogspot.com/2006/03/latest-on-nhs-computer-disaster.html

  • (cs) in reply to Synonymous Awkward
    Synonymous Awkward:
    It's just been decided that a notoriously large and muddy application (it eats souls) where I work is to be replaced in its entirety. I've never heard a meeting room full of overworked programmers cheer at being given extra work before.

    As far as documentation goes, I always liked Lisp's docstring approach. Having the documentation as part of the code helps me concentrate that little extra on keeping it up-to-date; with a well-written program (not that I ever write those) you might even be able to use docstrings as part of your user-help system, which I suppose would be another incentive.

    Lisp docstrings, Javadoc, and other manifestations of literate programming are arguably better than nothing.

    Equally, they are arguably worse than nothing.

    Simply put, there is no one-to-one relationship between any part of the code and any part of the documentation. Nor is there a one-to-many, many-to-one, or many-to-many relationship. Nor is it possible to impose an arbitrary mapping such as "take this API entry, format it as X for the design, Y for the help page, Z for the user guide." Yes, you'll get "documentation," but the signal-to-noise ratio will render it unusable.

    The only viable alternative is to learn to read and write. Third-graders can do it, so why not programmers?

    If you want the system you're working on to last the proverbial fifteen years or more, you really do need to put some effort into producing documentation and keeping it up to date.

  • blindman (unregistered) in reply to nobody
    nobody:
    blindman:
    "No one could have predicted that the software would have changed in the manner that it did." Bull$h1t. Scalability should be anticipated in every application. Only a complete noob would assume they fully understand the clients requirements, or the even the client fully understands their requirements. Duh.
    Uh, scalability is an architectural quality describing the efficiency and limit to which a system can take advantage of additional resources (computational, storage, network, etc.) to handle greater or lesser workloads.
    Uh, as a database designer, I can tell you that scalability absolutely includes the ability to expand upon functionality without having to scrap the current design. If your definition does not include this, that speaks volumes about the quality of the applications you develop.
  • wgc (unregistered) in reply to ammoQ
    ammoQ:
    Anyway "nearly a million lines of code" doesn't seem very much to me. I wonder how they managed to spend 200M for that quantity of code.

    No kidding! Where can I get paid $200/LOC?

  • (cs) in reply to blindman
    blindman:
    nobody:
    blindman:
    "No one could have predicted that the software would have changed in the manner that it did." Bull$h1t. Scalability should be anticipated in every application. Only a complete noob would assume they fully understand the clients requirements, or the even the client fully understands their requirements. Duh.
    Uh, scalability is an architectural quality describing the efficiency and limit to which a system can take advantage of additional resources (computational, storage, network, etc.) to handle greater or lesser workloads.
    Uh, as a database designer, I can tell you that scalability absolutely includes the ability to expand upon functionality without having to scrap the current design. If your definition does not include this, that speaks volumes about the quality of the applications you develop.
    Hmmmm.
    Lewis Carroll:
    "--and that shows that there are three hundred and sixty-four days when you might get un-birthday presents--"

    "Certainly," said Alice.

    "And only one for birthday presents, you know. There's glory for you!"

    "I don't know what you mean by 'glory,'" Alice said.

    Humpty Dumpty smiled contemptuously. "Of course you don't--till I tell you. I meant, 'there's a nice knock-down argument for you!'"

    "But 'glory' doesn't mean 'a nice knock-down argument,'" Alice objected.

    "When I use a word," Humpty Dumpty said, in a rather scornful tone, "it means just what I choose it to mean--neither more nor less."

    As a legendary egg, I refute the conventional Database Designer definition of "scalability." 99% of household germs, or software engineers as Humpty would have it, use "scalability" to mean precisely what "nobody" says it means.

    I think the word you're looking for is "extensibility," and if your definition of that doesn't include "the ability to check before you pontificate," then that speaks volumes ...

  • wgc (unregistered) in reply to BigDigDug
    BigDigDug:
    Oh please, the whole tunnel thing was pointless. There are these two technologies that are simpler than tunnels and that have worked for thousands of years: plain old roads and bridges.

    Throwing the highway underground for aesthetic reasons is a perfect analogy to people using "enterprise" technologies when simpler technologies would work better and cheaper.

    Never been there, huh? The project may have been mismanaged, delayed, over budget, and marred by a few well-publicized goofs, but it's a huge success. As someone who used to commute through there, I'm convinced it saved me almost an hour a day of sitting in traffic. Multiply that by however many tens of thousands of cars doing the same thing and you get a huge improvement. I no longer do that commute, but it saves at least 20 minutes every time I go to the airport. This project is a success for most of its users (if not for the people who paid for it).

    I used to work in a 35-story building that was literally feet away from the old elevated highway: maybe room for a sidewalk between them. How do you enlarge that road? How do you deal with all the cross-roads and connections? One of the biggest problems with the old road was too many old ramps with no room for merge or exit lanes: where can you put those? Another issue was bends that were too sharp, with buildings in the way: how do you straighten those? Building vertically gives more room for extra lanes, ramps, and cross-streets. Building underground gives room for those, plus straightening the road without moving buildings. Which would have been more expensive: building underground or moving buildings?

  • Hognoxious (unregistered) in reply to Raedwald
    Raedwald:
    That came home to me in a previous job. The company culture was to have a can-do attitude
    Yeah, I heard of a guy who had that. Icarus, I think that was his name.

    http://despair.com/delusions.html

  • nobody (unregistered) in reply to blindman
    blindman:
    nobody:
    blindman:
    "No one could have predicted that the software would have changed in the manner that it did." Bull$h1t. Scalability should be anticipated in every application. Only a complete noob would assume they fully understand the clients requirements, or the even the client fully understands their requirements. Duh.
    Uh, scalability is an architectural quality describing the efficiency and limit to which a system can take advantage of additional resources (computational, storage, network, etc.) to handle greater or lesser workloads.
    Uh, as a database designer, I can tell you that scalability absolutely includes the ability to expand upon functionality without having to scrap the current design. If your definition does not include this, that speaks volumes about the quality of the applications you develop.
    What? It is not my definition - it is the accepted INDUSTRY definition. What you describe is generally known as flexibility, extensibility, or maintainability - all important architectural qualities as well. In fact, these are at times MORE IMPORTANT than extensive scalability.

    Perhaps you should become more familiar with the definition of industry terms. http://en.wikipedia.org/wiki/Scalability http://www.pcmag.com/encyclopedia_term/0,2542,t=scalable&i=50835,00.asp http://datageekgal.blogspot.com/2007/04/10g-performance-and-tuning-guide-first.html (the latter thrown in for you database guys)

    I suppose one can think of scaling the problem space (the ability to handle additional requirements), but that is an unusual usage of the term. Not wrong, I suppose, just unusual. The problem with the unusual usage is that it is likely to create confusion among others, as they will (most likely) be thinking about workload handling and not requirements handling.

    But perhaps you work in a community where that definition is common. I am not familiar with any such community.

  • nobody (unregistered) in reply to joe
    joe:
    Here's SAIC's (the government contractor) side of the story:

    http://www.saic.com/cover-archive/law/trilogy.html

    A very interesting read. I've seen a few large projects where the client has insisted on a "flash cut-over". These did not go well. For complex, interconnected and "mission critical" systems, anything other than a risk-averse, incremental roll-out is a VERY BAD IDEA.

    Whenever I have not walked away from a project where the client continued to insist upon such a plan even though the architects and project managers insisted it was ill-advised, I have ALWAYS regretted it. Clients get stupid ideas. Professionals tell them the truth and don't let them go on adventures that will end in failure. Or at least they don't help them fail.

    Unfortunately for clients and customers, our industry is not a profession.
