Avoiding Development Disasters

  • ParkinT 2007-10-09 11:02
    Sad.
    A typical example of a US Government program.

    God bless America {it needs all the help it can get}
  • Anon 2007-10-09 11:19
    ParkinT:
    Sad.
    A typical example of a US Government program.

    Sadly, it isn't: most US government programs get put into production EVEN THOUGH they work as well as VCF did.
  • Pap 2007-10-09 11:20
    The executive summary: RTFM!!
  • greywar 2007-10-09 11:21
    It's not just the US government, it's governments in general. A government doesn't feel constrained by profitability, so it feels more than willing to add features no matter the cost.

    Then add in that there's little accountability... and presto! Instant disaster.
  • Tim Smith 2007-10-09 11:26
    LOL, it isn't just the government.

    It sort of reminds me of the old joke that there is always an asshole in every group. Look around, if you don't see one you are the asshole.

    Every sector of business has produced bad applications. If you can't find an example around you, it is probably your application that is the piece of trash destined to suck the life blood out of every programmer that touches it.
  • ammoQ 2007-10-09 11:27
    Those failures are not specific to the U.S.
    I know of several similarly costly failures in Europe, too.

    Anyway, "nearly a million lines of code" doesn't seem like very much to me. I wonder how they managed to spend $200M for that quantity of code.
  • Grumpy 2007-10-09 11:28
    When did I subscribe for a 2-page essay?
    Too many words ...
  • Brandon 2007-10-09 11:43
    Grumpy:
    When did I subscribe for a 2-page essay?
    Too many words ...

    Get over it... that's the problem: "managers" like you get the job and are too lazy to educate themselves further on software management. This article isn't exactly new material, but it stresses the importance of having formal training and knowing when to make these things called "decisions" which, amazingly, have an impact on the business.
  • gabba 2007-10-09 11:44
    I'm confused as to why "REWRITE LARGE SWATHS OF CODE" is considered an essential step on the way to software success. Rewriting code is often necessary, but why is it put on the same level as fixing bugs and testing?
  • Pecos Bill 2007-10-09 11:45
    ammoQ:
    Those failures are not specific to the U.S.
    I know of several similarly costly failures in Europe, too.

    Anyway, "nearly a million lines of code" doesn't seem like very much to me. I wonder how they managed to spend $200M for that quantity of code.


    Duh! Applying scope creep to their requirements!
  • Robert S. Robbins 2007-10-09 11:48
    I think documenting an application and its code is more important to maintenance than trying to generalize and abstract functionality. Developers just like to complicate things to the point where nobody can understand how the system works.

    There are only a few changes you should plan for, like a change in the database design. I would just document where you need to update code and queries to accommodate a new column rather than attempt to make the application independent of its database design.

    I am currently working on an application that loads the entire database schema into jagged arrays so you have no idea what column or table you may be dealing with in a query.
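
    To illustrate the contrast (a hypothetical sketch in Java -- the table, column and method names are all invented):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        class SchemaContrast {
            // The opaque style: the schema is just data in jagged arrays,
            // so no query ever names a table or a column.
            static String opaqueLookup(String[][] schema) {
                return schema[3][7]; // which table? which column? who knows
            }

            // The documented style: the SQL itself says what is being read.
            // (Resource cleanup elided for brevity.)
            static String explicitLookup(Connection conn, int badge) throws SQLException {
                PreparedStatement ps = conn.prepareStatement(
                        "SELECT last_name FROM agents WHERE badge_number = ?");
                ps.setInt(1, badge);
                ResultSet rs = ps.executeQuery();
                return rs.next() ? rs.getString("last_name") : null;
            }
        }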
  • FredSaw 2007-10-09 11:57
    Good essay. I'll share it with my team.

    Management: Developers were both poorly managed and micromanaged.
    Micromanagement is a subset of poor management. Been there, experienced that.
  • Anteater 2007-10-09 11:57
    Hey isn't that picture a screenshot from Final Fantasy VII?
  • Michael 2007-10-09 11:59
    gabba:
    I'm confused as to why "REWRITE LARGE SWATHS OF CODE" is considered an essential step on the way to software success. Rewriting code is often necessary, but why is it put on the same level as fixing bugs and testing?
    The first version is almost always a throwaway; expect it and accept it, because it will be significantly better the second time around. Once you build the code once, you have learned 90% of what was wrong with the design. A large rewrite at that point will pay off exponentially down the road.
  • Spectre 2007-10-09 11:59
    So Alex is running out of submissions, eh? Nice diagram, though...
  • JohnFromTroy 2007-10-09 12:07
    Excellent essay.

    The rule-o-three thing is worthy of coder's scripture.
  • Anonymous 2007-10-09 12:10
    Bull and Shit.

    VCF was written by a private company, for a public customer.

    This does not "prove" in any way the dubious axiom that "private industry always does it better than the government". The problem with VCF had more to do with a lack of adequate oversight. The people writing the checks were not held responsible for the results, so they didn't make sure that the people writing the software were held responsible for the results. Sad state of affairs, but the fix to this problem is: get rid of corrupt politicians. (Yes, both parties - yes, it's the campaign finance system, STILL.)
  • brazzy 2007-10-09 12:13
    Robert S. Robbins:
    I think documenting an application and its code is more important to maintenance than trying to generalize and abstract functionality. Developers just like to complicate things to the point where nobody can understand how the system works.

    No amount of documentation can fix an unmaintainable system. A really good system needs minimal documentation. Too much documentation means more of it will get outdated sooner, and not be fixed, and more of it will be simply wrong.

    On the other hand, it's certainly possible to make a system so general and with so many layers of abstraction that it becomes unmaintainable as well. To be maintainable, a system or API needs first and foremost to be simple. Then you don't need much documentation. Abstractions that remove complexity are good (unless you need the more complex features and cannot bypass the abstraction). Abstractions that add complexity are bad.
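
    A contrived Java illustration of the difference (all names invented):

        // An abstraction that removes complexity: callers no longer care
        // about retries, pooling, or wire formats.
        interface DocumentStore {
            byte[] fetch(String documentId);
        }

        // An abstraction that adds complexity: callers now need two factories
        // and a config key to do what one method call did before.
        interface DocumentStoreStrategyFactory {
            DocumentStore create(String region);
        }
        interface DocumentStoreStrategyFactoryProvider {
            DocumentStoreStrategyFactory factoryFor(String configKey);
        }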
  • Richard Sargent 2007-10-09 12:22

    As for knowing when to generalize, Haack lives by the rule of three: "The first time you notice something that might repeat, don't generalize it. The second time the situation occurs, develop in a similar fashion -- possibly even copy/paste -- but don't generalize yet. On the third time, look to generalize the approach."


    P.J. Plauger published an article many years ago. I think it was in Software Development magazine, sometime in 1989. The article was about what he called the "0, 1, infinity" rule.

    In essence, once the pattern goes multiple, it is time to design accordingly. Switch from the singular pattern into a multiple pattern and be ready for all future enhancements of the same nature. They will come. Build it. :-)
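
    A toy Java sketch of how the rule plays out (the names are invented):

        class CaseLabels {
            // 1st occurrence: written inline, no generalization.
            static String newYorkLabel(String caseId) { return caseId + " (NY)"; }

            // 2nd occurrence: deliberately copy/pasted, still no generalization.
            static String dcLabel(String caseId) { return caseId + " (DC)"; }

            // 3rd occurrence: the pattern is confirmed, so generalize it.
            static String label(String caseId, String office) {
                return caseId + " (" + office + ")";
            }
        }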
  • nickinuse 2007-10-09 12:24
    Makes me wonder who could possibly write the system for Counter Terrorist Unit in 24 Hours ;)

    CAPTCHA: smile, yeah I did.
  • different anon 2007-10-09 12:32
    Anonymous:
    Bull and Shit.

    VCF was written by a private company, for a public customer.

    This does not "prove" in any way the dubious axiom that "private industry always does it better than the government".

    Did you even read the essay?

    That axiom has nothing to do with the issue at hand. The essay was about terrible management and its results, not on public vs. private industry.
  • T $ 2007-10-09 12:58
    My favorite was the Receive Requests for New Feature --> Bugger off! --> Visit from Upper Management --> Project Outsourced to India
  • ICanHaveProperGrammar 2007-10-09 13:07
    It's definitely not just government. I work for MegaCorp (tm), which had a project that was designed to replace an old system and some bits of other systems.

    The project was to take a year, and cost roughly $1 million a day.

    It followed exactly the same pattern described here, and it was delivered last week: two years late, slower than the old system, with crippling bugs still evident, and it doesn't go any way toward replacing anything other than the single system we wanted to replace.

    I'm just glad I got pushed off the project after bursting into laughter when I saw the first delivery of code. Even my old boss who stuck it through to the end said that was the point it should have been taken round the back and shot.
  • operagost 2007-10-09 13:10
    gabba:
    I'm confused as to why "REWRITE LARGE SWATHS OF CODE" is considered an essential step on the way to software success.

    How about "DITCH FEATURES?" Apparently, that's required too, even after pushing back the schedule.
  • real_aardvark 2007-10-09 13:19
    brazzy:
    Robert S. Robbins:
    I think documenting an application and its code is more important to maintenance than trying to generalize and abstract functionality. Developers just like to complicate things to the point where nobody can understand how the system works.

    No amount of documentation can fix an unmaintainable system. A really good system needs minimal documentation. Too much documentation means more of it will get outdated sooner, and not be fixed, and more of it will be simply wrong.

    On the other hand, it's certainly possible to make a system so general and with so many layers of abstraction that it becomes unmaintainable as well. To be maintainable, a system or API needs first and foremost to be simple. Then you don't need much documentation. Abstractions that remove complexity are good (unless you need the more complex features and cannot bypass the abstraction). Abstractions that add complexity are bad.

    Ah, generalisations ...

    Allow me to throw in my own.

    There is no correlation between the chewy goodness of a system and the amount of documentation required. Thus, I think, your (unintentionally) weasel words, "minimal" and "too much."

    There is, however, a distinct correlation between the literacy skills of 95% of programmers and the amount of documentation, let alone decent and useful documentation, produced. This correlation leads to the under-production of documentation.

    There is also a distinct correlation between the "jobsworth" tendencies of the average, process-obsessed PHB and the amount of (entirely useless) documentation produced. This correlation leads to the over-production of documentation.

    Most projects, in documentation terms, are a heady mix of the two tendencies. In other words, you get lots of documentation to choose from, but none of it is useful. One project in my past had an entire server to hold the documentation -- almost all of which consisted of minutes from meetings over the previous three years. On the other hand, a description of the database schema was nowhere to be found.

    I'm continually amused by this agiley claim that "documentation gets outdated and [won't] be fixed and will [largely] be wrong." Ho ho, eh? You'd never say this about code, of course. Oh no. You can never have too much code.

    And it never gets outdated and won't be fixed and won't [largely] be wrong.
  • nobody 2007-10-09 13:23
    To throw fuel on the private versus public sector fire, consider the Fannie Mae CORE project. This is a private company, helped by a private "world class" partner, spending hundreds of millions of dollars of stockholders' money to produce a system that could not be made to work. Bad management, bad requirements, bad architecture, bad engineering and bad testing.

    It had every one of the issues discussed in the Avoiding Development Disasters article and invented a few of its own ways to fail.
    ----------------------

    captcha: "craaazy" - how appropriate.
  • Mark 2007-10-09 13:25
    I think something that is often overlooked in the cycle of failure that plagues large organizations is culture.

    A few years back I worked as a consultant for a global corp. I was on a team of about fifteen analysts helping them with an enterprise-wide order provisioning system. We were tasked with helping to define and manage the requirements and then to coordinate integration testing and UAT.

    From the outset there were problems because there was massive distrust of anything that the internal IT department was involved with. IT had experienced a long run of WTFs, releasing product after product that never made it past beta (and the affected user base continued to use paper based 'systems'). In the case of the project I was working on, the centralized database for the existing 'system' was a room full of physical files.

    So anyway, here was a user community in desperate need of some help. Even an adequate IT solution would have been an upgrade to manually walking folders around the office to fulfill orders. But the users wanted no part of the project because of IT's reputation. And that became a self-fulfilling prophecy because some SMEs more or less refused to take part in requirements definition. Their attitude was 'build it and then I'll have a look at it'.

    So because it is so 'easy' to fail on a large project, I think it's common for IT groups in large organizations to develop bad reputations with their internal clients. And from there it becomes difficult to recover.

    Just my 2 cents

    Oh, and to tell what happened on the project: the software did eventually launch into production, and the teams that had provided SMEs to help define requirements were happy with the product, while the teams that hadn't wanted to scrap it. Last I heard, it was being pulled from production after about two years.
  • O Thieu Choi 2007-10-09 13:30
    "In most other industries, equating completion and success would be ludicrous. After all, few in the construction industry would deem a two-year-old house with a leaky roof and a shifting foundation anything but a disaster -- and an outright liability."

    You don't know some of the builders I do, and I congratulate you on that.
  • nobody 2007-10-09 13:31
    Anonymous:
    Sad state of affairs, but the fix to this problem is: get rid of corrupt politicians. (yes; both parties - yes, it's the campaign finance system, STILL).

    Sorry, but politicians are not the problem here. It is bad executive management, bad project management and bad software engineering. Politicians bring in their own evils, as does the general state of development practices within the DOJ (at the time and now), but this is clearly a common issue. Large projects often fail because organizations don't practice the necessary techniques, nor do they have sufficiently skilled management and staff to succeed.

    But you are right, holding those writing the checks accountable is part of the solution. As is holding the architects and the head project managers accountable. In too many cases no one ends up accountable for massive failures. No pain, no improvement.
    --------------------

    captcha: "tacos" - Lunch time!
  • nobody 2007-10-09 13:44
    O Thieu Choi:
    "In most other industries, equating completion and success would be ludicrous. After all, few in the construction industry would deem a two-year-old house with a leaky roof and a shifting foundation anything but a disaster -- and an outright liability."

    You don't know some of the builders I do, and I congratulate you on that.

    I too used to give construction industry counter-examples to poor software engineering. Having gone through the construction process of a few homes now I no longer make that mistake. It's not that they are worse than software efforts, it's that they aren't much better.

    I hope construction of larger buildings, bridges, dams, etc. is done far better. But I know there are examples that show that this is not always so.
  • dphunct 2007-10-09 13:44
    brazzy:
    Robert S. Robbins:
    I think documenting an application and its code is more important to maintenance than trying to generalize and abstract functionality. Developers just like to complicate things to the point where nobody can understand how the system works.

    No amount of documentation can fix an unmaintainable system. A really good system needs minimal documentation. Too much documentation means more of it will get outdated sooner, and not be fixed, and more of it will be simply wrong.

    amen
    brazzy:

    On the other hand, it's certainly possible to make a system so general and with so many layers of abstraction that it becomes unmaintainable as well.

    I agree wholeheartedly
    brazzy:

    To be maintainable, a system or API needs first and foremost to be simple. Then you don't need much documentation. Abstractions that remove complexity are good (unless you need the more complex features and cannot bypass the abstraction). Abstractions that add complexity are bad.

    well put!

    I think one big problem about the documentation debate that happens between developers is that different developers take the general term "documentation" to mean different things.

    One form of documentation should describe a system, its subsystems, and how it connects and interacts with other systems.

    Another form of documentation should describe the hack that had to be put in place to make the language/OS/device/etc. work.

    The code itself should be a form of documentation by describing what it is doing through variable and method names.

    A different form of documentation can describe the different levels of abstraction to help map new developers to important components much more quickly.

    These are all forms of "documentation", but aren't the same thing. There are other forms of documentation, but this post is already too long. Not all programs need each type of documentation.

    The types and detail of documentation needed in an application are as specific to the application as its design. Maintaining the documentation is just as important as maintaining the code.
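
    For example, two of those forms can sit side by side in a single method (a hypothetical Java fragment):

        class InvoiceMailer {
            /** Sends the invoice notice to the customer's billing address. */
            void sendInvoice(String invoiceNumber, String billingAddress) {
                // HACK: our (hypothetical) SMTP gateway silently drops messages
                // whose subject exceeds 78 bytes, so truncate defensively.
                // Remove once the gateway is upgraded.
                String subject = truncate("Invoice " + invoiceNumber, 78);
                deliver(billingAddress, subject);
            }

            private static String truncate(String s, int max) {
                return s.length() <= max ? s : s.substring(0, max);
            }

            private void deliver(String to, String subject) {
                // delivery elided
            }
        }

    The method and variable names document the "what"; the comment documents the hack.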
  • foppy 2007-10-09 14:00
    Wow, training and experience are needed to run a successful project. Who'd a thunk it? The problem is twofold. First, everyone thinks they're an expert in whatever job you assign them. Secondly, 90% of projects are filled up with people who are solely in it to grab as much money as they can. The more consultants you add, the quicker the money is bled, with little regret over a failing project. Once the money is gone, they'll just move on to the next chump.
  • OneMHz 2007-10-09 14:05
    Computers only do exactly what someone tells them to (be they user, programmer, or hardware engineer). Saying "it's not the technology or the tools, it's the people" is kind of redundant. Though, being redundant doesn't make it wrong.
  • Mark 2007-10-09 14:08
    foppy:
    Wow, training and experience are needed to run a successful project. Who'd a thunk it? The problem is twofold. Everyone thinks they're an expert in whatever job you assign them. Secondly, 90% of projects are filled up with people who solely are in to grab as much money as they can. The more consultants you add in, the quicker the money is bled with little regret of a failing project. Once the money is gone, they'll just move on to the next chump.


    In large part, I agree with your characterization of consultants because most consultants come from the school of 'information hiding'. They rarely go into it with their client's best interest at heart. Their real goal is billable hours and therefore, they can actually become barriers to success.

    When I was a consultant, I occasionally interfaced with folks from Andersen Consulting (or Arthur Andersen or whatever they were called) and they were just the worst at this. I would characterize them as a bunch of c*** blockers who did everything they could to stretch projects out.
  • felix 2007-10-09 14:13
    Maintainable software begins at the highest level -- the Enterprise Architecture


    Oh, please. Most of the worst stories on this site have the word "Enterprise" in the title. Designing every application to be enterprisey right from the start is a recipe for disaster. Too much planning is just as bad as none at all. Planning too far ahead ends up smack against that cold hard thingie called "reality". The trick is to reconsider the scope of the project at every stage, and have the guts to react appropriately. A complete refactoring at around 10,000 LOC is manageable. At 100,000, I'm not so sure anymore. At one million... yeah, I'm kidding.

    In other words do plan, and do what's appropriate for the project. Just don't rush to do the most impressive thing you can think of.
  • Scott 2007-10-09 14:15
    real_aardvark:

    I'm continually amused by this agiley claim that "documentation gets outdated and [won't] be fixed and will [largely] be wrong." Ho ho, eh? You'd never say this about code, of course. Oh no. You can never have too much code.

    And it never gets outdated and won't be fixed and won't [largely] be wrong.


    I don't understand what you are trying to say. When an agilist says that the docs will become outdated, this means with respect to how the application actually works (the code). How can the code be outdated with respect to the code?
  • emurphy 2007-10-09 14:16
    JohnFromTroy:
    Excellent essay.

    The rule-o-three thing is worthy of coder's scripture.


    http://c2.com/cgi/wiki?RuleOfThree
  • Badger 2007-10-09 14:25
    Michael:
    gabba:
    I'm confused as to why "REWRITE LARGE SWATHS OF CODE" is considered an essential step on the way to software success. Rewriting code is often necessary, but why is it put on the same level as fixing bugs and testing?
    The first version is almost always a throwaway; expect it and accept it, because it will be significantly better the second time around. Once you build the code once, you have learned 90% of what was wrong with the design. A large rewrite at that point will pay off exponentially down the road.

    I'm currently 3.5 years into a large software project, and we're nearing the end of the "REWRITE LARGE SWATHS OF CODE" part. Our path followed this diagram. I feel warm and fuzzy; we're nearly there.
  • SRM 2007-10-09 14:37
    Interestingly, neither the article nor any of the comments mention software project risk management[*]. Either risk management was non-existent, or the management decided to ignore all classical warning signs of an impending project failure.

    [*] Well, kind of obvious given this is the WTF site...
  • Ollie Jones 2007-10-09 14:44
    Info World has a pretty good writeup (3-21-2005) of this disastrous VCF software project, for which the prime contractor apparently was SAIC.

    http://www.infoworld.com/article/05/03/21/12FEfbi_1.html

    Before laughing our faces off at these guys, I think we should remember that the heinous events of Sept 11, 2001 and the ensuing FBI shakeup came right in the middle of this contract.

    I lament that the Sept 11 attacks were so successful. Maybe this $200M is one of the losses caused, at least partly, by the attacks.
  • Franz Kafka 2007-10-09 15:04
    gabba:
    I'm confused as to why "REWRITE LARGE SWATHS OF CODE" is considered an essential step on the way to software success. Rewriting code is often necessary, but why is it put on the same level as fixing bugs and testing?


    Because this is a flowchart for an awful, dysfunctional system.
  • ARandomGuy 2007-10-09 15:09
    JohnFromTroy:
    Excellent essay.

    The rule-o-three thing is worthy of coder's scripture.


    And it actually does show up in the GoF patterns book... (A fact that seems to be lost on all sorts of people who mistake design for jamming every pattern in the book into the code.)

    CAPTCHA: muhahaha
  • Franz Kafka 2007-10-09 15:17
    Ollie Jones:
    Info World has a pretty good writeup (3-21-2005) of this disastrous VCF software project, for which the prime contractor apparently was SAIC.

    http://www.infoworld.com/article/05/03/21/12FEfbi_1.html

    Before laughing our faces off at these guys, I think we should remember that the heinous events of Sept 11, 2001 and the ensuing FBI shakeup came right in the middle of this contract.

    I lament that the Sept 11 attacks were so successful. Maybe this $200M is one of the losses caused, at least partly, by the attacks.


    Then we can proceed laughing uproariously at them. 9/11 isn't an excuse to screw the pooch! 9/11 happened, now get on with your life. If this were a real war, with an enemy that actually threatened our security, and you used a successful attack as an excuse to screw around for two years, you'd be thrown in a box.

    Seriously, WTF are you thinking?
  • Richard 2007-10-09 15:26
    Tim Smith:
    LOL, it isn't just the government.


    It isn't just commercial software, either. Lots of FOSS suffers from the same Big Ball of Mud designs.
  • brazzy 2007-10-09 15:26
    Mark:
    When I was a consultant, I occasionally interfaced with folks from Andersen Consulting (or Arthur Andersen or whatever they were called) and they were just the worst at this. I would characterize them as a bunch of c*** blockers who did everything they could to stretch projects out.

    Arthur Andersen was a financial auditing and accounting company that disbanded after the Enron disaster, and Andersen Consulting was its IT/Management consulting spinoff that was renamed to Accenture in 2001.

    And yes, they have a very bad reputation for doing anything to increase their share of a project - but pretty much all big IT consulting companies have the same, well-earned reputation. It's the smaller companies and the freelancers that can supply quality and solve problems rather than prolonging them - if you find the right ones. They won't bloat a project in order to put in 20 more of their people, because they don't *have* 20 more people on short notice. Of course, that's not a good thing when you actually *need* 20 more people, which is why the big companies get hired.
  • XNeo 2007-10-09 15:27
    Well, VCF failed and got scrapped by the government. That's where our German government is fundamentally different: here, huge failures in software development (A2LL, for example) just go into production anyway...
  • AuMatar 2007-10-09 15:35
    Robert S. Robbins:

    I am currently working on an application that loads the entire database schema into jagged arrays so you have no idea what column or table you may be dealing with in a query.


    I await reading your story here. Of course, since you seem to think this is a good idea, it will be posted by your coworkers.
  • will 2007-10-09 15:36
    Knew someone who worked on this, and the story from the developers is not the same as presented here.

    This project was originally one of these huge SEI CMM projects. You deliver the software and it is tested against the list of all requirements; make sure you get under the number of faults allowed, fix those, get the final version accepted, and hope you win the contract for the next release.

    Development was going well; they were replacing the old system and adding a bunch of new capability. Then 9/11 happened and the FBI decided to change everything around.

    So SAIC, the company developing this software, got the FBI to switch to a more agile-ish system where they would deliver some basic features, get those tested, start getting them deployed, and continue. They did that and came out with the first version, and the FBI went and tested it as if it were the final version.

  • benny 2007-10-09 15:40
    Yay, a factual retelling of a well-known, bloated, mismanaged gov't project. Hardly a WTF. What's next? Therac-25?
  • LBL 2007-10-09 15:43
    Repeat it with me: Agile... Software... Development...
  • different anon 2007-10-09 15:46
    Franz Kafka:

    Then we can proceed laughing uproariously at them. 9/11 isn't an excuse to screw the pooch! 9/11 happened, now get on with your life. If this were a real war, with an enemy that actually threatened our security, and you used a successful attack as an excuse to screw around for two years, you'd be thrown in a box.

    Seriously, WTF are you thinking?


    Next time RTFA.

    FTA:
    "Only a few months after the ink was dry on these contracts, the Sept. 11 tragedy struck, reshaping the mission of the FBI. No longer would the Bureau be concerned merely with law enforcement. Instead, to protect against terrorism on U.S. soil, the FBI needed to get into the intelligence business.

    This shift turned the requirements for UAC inside out. Instead of beautifying old mainframe apps, the charter changed to replacing those applications with a new, collaborative environment for gathering, sharing, and analyzing evidence and intelligence data. "

    Also FTA:
    At the February 2005 hearing, Mueller said that the FBI delivered "finalized" requirements for VCF in June 2002, which included integrating the functionality of the five original ACS applications with the new system. But according to Hughes, the changes kept coming at a rate of more than one per day.

    Also, from Wikipedia:
    FBI Directors during the development of VCF (since 2000):
    Louis J. Freeh 1993–2001
    Thomas J. Pickard 2001 (Acting director)
    Robert S. Mueller III 2001–present

    You're right, the new requirements wanted by a bureaucracy in turmoil, guided by multiple directors, have no connection to a project's failure! That's not a set of moving targets! </sarcasm>
  • ssanchez 2007-10-09 16:11
    "Most expensive failed project", you have got to be kidding, it doesn't even come close. Have a look at the UK's National Health Service current IT project:

    http://search.bbc.co.uk/cgi-bin/search/results.pl?scope=all&edition=d&q=NHS+IT+system&go=Search

    To quote:
    The system is set to cost £6.8bn extra over 10 years.

    In addition, when the training and local implementation is taken into account, the figure rises to over £12bn.


    That's right, 6.8 BILLION - wait for it - POUNDS STERLING. That's a cool USD 13.6bn, more than it cost MS to produce Vista, including the marketing. And no-one in the UK seems in the slightest bit bothered that it's destined for failure.

    CAPTCHA (speaking of failed IT ventures): atari

  • BigDigDug 2007-10-09 16:29
    nobody:
    I too used to give construction industry counter-examples to poor software engineering. Having gone through the construction process of a few homes now I no longer make that mistake. It's not that they are worse than software efforts, it's that they aren't much better.

    I hope construction of larger buildings, bridges, dams, etc. is done far better. But I know there are examples that show that this is not always so.

    I know one great example: Boston's Big Dig.

    Some highlights include the builders using deficient concrete and 2-ton concrete tiles that were GLUED to the roof of tunnels.

    Just like many IT projects, ultimately, it was an expensive pointless project to replace a working system with a new, flashy system that would solve all the problems of the old system because it was NEW.

    (It replaced a working highway with a new tunnel, because the raised highway was "ugly".)

    However, unlike most IT failures, this one did kill someone when one of the glued tiles became unglued.
  • Beau "Porpus" Wilkinson 2007-10-09 16:43
    Very large software projects seem to be doomed to failure. This has been shown statistically, using case studies, using anecdotes like this one, etc., ad nauseam.

    In light of that fact, one might reasonably ask why we don't just look for opportunities to address needs using smaller systems. In the case of VCF, I suspect that the users in the field could have suggested small improvements or minimal new systems that would make their jobs easier in specific ways.

    The problem is that executives have to think big, and develop big "visions," to be recognized by our business culture. So we get "enterprise" apps that are supposed to revolutionize the way the users work... and they always seem to involve BizTalk, and/or Citrix, and/or bloated Ajax libraries, and whatever else looks good on paper to executives (who often can't even check e-mail).

    It's really depressing... it results in absurd instructions like "don't worry about fixing that invoice generator, we need you on the LARS project."

    (Yeah, Mr. Cook, I'm talking about you)
  • Corporate Cog 2007-10-09 16:56
    It's a sad fact that both the current system I maintain and the last one, at a former employer, are EOL. The latter is being rewritten from scratch (mostly by the folks who brought us the VCF). When the announcement came that it was to be rewritten, most of my coworkers were kind of shocked. Yes, it barely worked (like my current system), but at what cost? Just as Alex described would have happened had the VCF system gone into production.
    Now my bad luck is following me, and the subsystem of the system I work on is going to be outsourced. And again, most of my coworkers are surprised.

    It seems most of the people I work with just expect software systems to be a big ball of mud and that there is no choice but to thrash against said ball indefinitely.
  • Fedaykin 2007-10-09 17:00
    Good article, but I disagree with the proposed lifetime of 15 years for an enterprise application. Perhaps there are some environments where that makes sense, but there are certainly some where it simply does not.

    Think about it. If any large corporation was still using a system developed in ~1992, it would most likely be a DOS based terminal application running on a proprietary network solution. While it would hopefully be a solid application, it would utterly fail to deliver the functionality necessary for the business using it to exist in the modern world.

    I think ~5 years is a more realistic target for enterprise apps. The apps I develop I generally consider viable for only about 3 years, but that's partly because the customers I develop for have needs that change on a yearly and sometimes quarterly basis. No app can be designed to handle that much change over more than a few years and not collapse under its own weight (unless a great deal of effort (read: money) is spent developing it).
  • Sin Tax 2007-10-09 17:18
    Expensive failed projects - that's not a real WTF.

    No, the real WTF is a project like the one I'm attached to. A public, underfunded customer who can't really afford the solution necessary or desired. Eternal fighting over price and deliveries, eternal discussion about whether to upgrade the obsolete and out-of-support J2EE/Portal product it is built on, a production environment that was put together by half-clueless developers (who were particularly clueless about building a production environment for high availability and maintainability) and is now so fragile that the appservers need to be kicked every night. Management trying to fix problems by throwing more hardware or more people at them.

    The *REAL* WTF? This system is intended to be a core element of a national strategy. It is the poster case for how to do such a thing right. It has won countless awards, nationally and internationally. It actually manages to be marginally useful as well, yet everyone technical who has been clued a bit about its internals will agree that it's rotten to the core.

    Captcha: Paint. Yeah, people don't care about quality. They care about shiny color.

    Sin Tax
  • Richard Sargent 2007-10-09 17:23
    Fedaykin:
    Think about it. If any large corporation was still using a system developed in ~1992, it would most likely be a DOS based terminal application running on a proprietary network solution. While it would hopefully be a solid application, it would utterly fail to deliver the functionality necessary for the business using it to exist in the modern world.



    bzzzt.

    Go back to school. One part of the system I work with was developed 35-40 years ago. No one knows and no one is willing to bet on how many more years it will be around.

    Years ago I corresponded with a gentleman from England who injured himself laughing at the puppy who thought all systems should be replaced every five years. (Well, he didn't really hurt himself. <s>) He figured portions of his system would still be running when the 2038 time problem surfaces.
  • Mark 2007-10-09 18:25
    As I see it, life-span is a business strategy consideration. IT is a service and so it has to adapt to the overall business strategy.

    I worked for a company that was very aggressive in terms of growth and entering new markets. Once the trigger was pulled, requirements gathering, development, testing and roll-out would proceed at breakneck speed. When I first started there it was a huge culture shock, because I would see all these failure points being built into the system - but the business strategy was to enter the market as quickly as possible and deal with the clean-up afterward. Even if the clean-up eventually meant retooling the whole system.

    The thinking from executive row was twofold.

    First, there's a pile of money there waiting to be made so don't let IT considerations slow you down - ever.

    Second, it's possible that we won't even want to stay in this market, so let's get a serviceable system in place, test the water and if we want to stay, we'll start fixing stuff.

    That was a real eye-opener for me because I really started to understand that in some situations, what we traditionally think of as solid SDLC practices can run counter to business strategy.

    I would never want to go back to working for that company because it was overly chaotic. But they are very successful with offices in four or five countries now. So it's hard to fault their approach.
  • Mystery 2007-10-09 18:44
    VCF wasn't "worse than failure." It was just a plain failure.
  • ICanHaveProperGrammar 2007-10-09 19:01
    Fedaykin:
    If any large corporation was still using a system developed in ~1992, it would most likely be a DOS based terminal application running on a proprietary network solution. While it would hopefully be a solid application, it would utterly fail to deliver the functionality necessary for the business using it to exist in the modern world.


    You have to be kidding! The MegaCorp(tm) I work for has 3 core systems that work together to handle 60% of their turnover (around a trillion US dollars annually). Of these:
    1) System 1 is 25 years old
    2) System 2 is 20 years old
    3) System 3 is an attempt to replace some of the user interfaces into 1 & 2. It's a (badly designed) GUI instead of a (well designed) greenscreen, and it's scheduled to be replaced before the other two are, simply because it's slowed the rate of data entry by 90%.

    I'm not saying it's easy wedging weekly changes into these apps, but if you really think that your banks, insurers, big oil, telecoms, power, water, gas, or government organisations aren't utterly reliant on "Enterprise" applications over a decade old, and failing miserably to find more recent apps that actually work better for them, then you are living in a fantasy world.

    Our latest "replacement and improvement" project delivered a system that is significantly slower and less functional than the 15 year old system that is being removed.

    Believe me, if you went round the top 100 companies (excluding big IT) in the world and blocked any Enterprise system over 10 years old, you'd be burning your neighbour's house for light and warmth, and fighting your pets for their food, inside a month.
  • Franz Kafka 2007-10-09 19:07
    BigDigDug:

    I know one great example: Boston's Big Dig.

    Some highlights include the builders using deficient concrete and 2-ton concrete tiles that were GLUED to the roof of tunnels.


    Epoxy is not the same as Elmer's glue - just because it's glue doesn't make it bad. Hell, SpaceShipOne is composite and glue all over.
  • Franz Kafka 2007-10-09 19:10
    different anon:
    Franz Kafka:

    Then we can proceed laughing uproariously at them. 9/11 isn't an excuse to screw the pooch! 9/11 happened, now get on with your life. If this were a real war, with an enemy that actually threatened our security, and you used a successful attack as an excuse to screw around for two years, you'd be thrown in a box.

    Seriously, WTF are you thinking?


    Next time RTFA.



    That doesn't excuse the mess. shit happens and you deal with it. You'd think the FBI would be able to do a better job than they did, but no. And really, 9/11 didn't reshape anything. All it did was underscore the need for better cooperation among agencies, subject to very good laws limiting that cooperation.
  • real_aardvark 2007-10-10 06:23
    Scott:
    real_aardvark:

    I'm continually amused by this agiley claim that "documentation gets outdated and [won't] be fixed and will [largely] be wrong." Ho ho, eh? You'd never say this about code, of course. Oh no. You can never have too much code.

    And it never gets outdated and won't be fixed and won't [largely] be wrong.


    I don't understand what you are trying to say. When an agilist says that the docs will become outdated, this means with respect to how the application actually works (the code). How can the code be outdated with respect to the code?

    OK, I'll be much more obvious.

    Programmers (agile or otherwise) are fond of emphasising the code at the expense of the documentation. I contend that this is because a goodly proportion of programmers are illiterates with no communication skills to speak of. (And a further large proportion do not write English as a first language, which further complicates things.)

    The result is that programmers, on the whole, hate writing and maintaining documentation. Therefore they do not do it. And, since we're all casuists of the first order, we need an evasive explanation of why we don't do things that we don't like. Thus, "documentation is unnecessary/gets out-of-date/was eaten by my dog."

    Niklaus Wirth believes that Algorithms + Data Structures = Programs. For all but the most trivial examples, I believe that Programs + Documentation = Usable System.

    All I was trying to say was that, if you can maintain the code, you can also maintain the documentation. The full semantics of a system are never contained solely in the code.

    How, for example, do you use the code to explain to future programmers why you've made a particular binary choice? Say you've chosen to use a hashmap rather than a RB-style map. Take one of these two possibilities:

    (1) Hashmap chosen for efficiency reasons. Yes, let's put that as a comment every time we use the hashmap. Far preferable to a single line in a design document.
    (2) Hashmap chosen because of obscure bug in RB map library. This should also be documented. What happens if a future release of the library fixes the bug?
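
    If a note must live in the code at all, the least-bad option is a single comment at the declaration site that points back to the design document. A sketch in Java (names invented):

        import java.util.HashMap;
        import java.util.Map;

        class SessionIndex {
            // See design doc, section 4.2: HashMap chosen over TreeMap for
            // O(1) lookup; iteration order is never relied upon.
            private final Map<String, Long> lastSeen = new HashMap<String, Long>();

            void touch(String agentId) {
                lastSeen.put(agentId, System.currentTimeMillis());
            }
        }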

    Things that do not get implemented in the code cannot be "documented" in the code. There are rather a lot of these in the average large-scale project.

    Don't be lazy. Don't be pitiful. Document fully. Documentation is a first-class citizen in the programming world, just as is coding, testing, and source control: and for precisely the same reason -- maintenance.
  • proko 2007-10-10 06:32
    "After all, few in the construction industry would deem a two-year-old house with a leaky roof and a shifting foundation anything but a disaster"
    That is not correct. In the country where I come from (Estonia), new houses get a guarantee for 2 years. Because of that, and because of the use of unqualified personnel for building, new construction usually starts leaking from the roof well before the deadline (2 years). And builders consider that normal :D
  • c 2007-10-10 06:54
    There is a line missing on the diagram from "Outsource to India". The cynic would say it should go straight to failure; otherwise it should go back to the start.
  • brazzy 2007-10-10 07:26
    BigDigDug:

    I know one great example: Boston's Big Dig.

    Some highlights include the builders using deficient concrete and 2-ton concrete tiles that were GLUED to the roof of tunnels.

    Just like many IT projects, ultimately, it was an expensive pointless project to replace a working system with a new, flashy system that would solve all the problems of the old system because it was NEW.

    (It replaced a working highway with a new tunnel, because the raised highway was "ugly".)

    Bullshit. The "working highway" was badly constructed in terms of making the traffic flow quickly and thus suffered from constant congestion.

    Additionally, the project included the rerouting of the traffic to and from Boston's main airport through a tunnel that bypasses the downtown area.

    The project may have been badly planned and executed, but it was far from pointless.
  • Raedwald 2007-10-10 08:02
    gabba:
    Rewriting code is often necessary, but why is it put on the same level as fixing bugs and testing?


    If that means refactoring, yes, it is just as important as fixing bugs and testing. Consider: you can always do more testing and fix more bugs. Therefore, if something really is less important than testing and fixing bugs, the only rational allocation of resources is not to do that thing at all; that is, use all the resources you would have allocated to it for testing and bug-fixing.

    It might be claimed that you cannot refactor the code because you need all those resources to fix important bugs. But if you are overwhelmed by important bugs, that suggests the design or specification of your code is faulty, which are problems no amount of testing and bug-fixing will solve.
  • Raedwald 2007-10-10 08:12
    Mark:
    I think something that is often overlooked in the cycle of failure that plagues large organizations is culture


    That came home to me in a previous job. The company culture was to have a can-do attitude; the company was very proud of that attitude, and senior managers often spoke approvingly of it. And who can blame them - what a good culture to have, right? No jobsworths and time-servers about.

    But that asset had a downside: the company was incapable of dealing with negative information and bad news. Consequently, the specifications for projects were appalling, because nobody could stand up and say that we could not produce this product because its specification was inadequate.
  • misha 2007-10-10 08:25
    Michael:
    gabba:
    I'm confused as to why "REWRITE LARGE SWATHS OF CODE" is considered an essential step on the way to software success. Rewriting code is often necessary, but why is it put on the same level as fixing bugs and testing?
    The first version is almost always a throwaway; expect it and accept it, because it will be significantly better the second time around. Once you build the code once, you have learned 90% of what was wrong with the design. A large rewrite at that point will pay off exponentially down the road.


    Plan to throw one away; you will, anyhow
    -- F. Brooks
  • ParkinT 2007-10-10 08:34
    I love the photo you chose, Alex.
    It reminds me of "The Addams Family" TV series.
  • KenW 2007-10-10 09:04
    Grumpy:
    When did I subscribe for a 2-page essay?
    Too many words ...


    So ask for a refund of your subscription price, moron.
  • Codeville 2007-10-10 09:19
    I was just thinking yesterday about how this makes developers feel at different stages of the process:

    http://blog.codeville.net/2007/10/09/version-101/
  • different anon 2007-10-10 09:27
    Franz Kafka:

    That doesn't excuse the mess. shit happens and you deal with it.


    "Deal with it" means dealing with the moving targets... how, exactly? The government didn't stop adding demands and the programmers couldn't say "go fuck yourselves." Or was there another bureaucracy willing to foot the bill?

    How would *you* have done it without losing your funding?
  • Jack 2007-10-10 09:58
    Discussing failed software always reminds me of the Anna Karenina principle: "[Successful projects] are all alike; every [failed project] is [a failure] in its own way." (http://en.wikipedia.org/wiki/Anna_Karenina#Plot_summary)

    The diagram shows that happy path. I believe any of us could describe new failure paths from our experience and that there would be no limit and no duplication to such paths.

    That's not to say, of course, that there aren't generalizations: insufficient testing, ignoring/misinterpreting requirements, overwhelming time constraints, etc. But each of these has a different root cause in each failed project.
  • Mark 2007-10-10 11:01
    I know exactly what you are saying because I've been in that exact situation as well (at a different company). Only I did stand up and call the specification crap (diplomatically, of course) and I was rewarded by being branded as an obstacle thrower.

    I feel I am adequately sensitive to over-design. But requirements that are nothing more than 'the system must work good-like' don't really give the developer anything to work from. So what happened was that there was a lot of back-channel communication, with the developer calling an SME and building stuff straight into the app with on-the-fly requirements.

    You can probably guess how that ended.
  • Fedaykin 2007-10-10 12:21
    You should learn to read. I specifically said there were circumstances where software with a long lifespan made sense.

    Don't be a jackass.
  • BigDigDug 2007-10-10 12:24
    brazzy:
    Bullshit. The "working highway" was badly constructed in terms of making the traffic flow quickly and thus suffered from constant congestion.

    Additionally, the project included the rerouting of the traffic to and from Boston's main airport through a tunnel that bypasses the downtown area.

    The project may have been badly planned and executed, but it was far from pointless.

    Oh please, the whole tunnel thing was pointless. There are these two technologies that are simpler than tunnels and that have worked for thousands of years: plain old roads and bridges.

    Throwing the highway underground for aesthetic reasons is a perfect analogy to people using "enterprise" technologies when simpler technologies would work better and cheaper.

    And to whoever complained that the tiles weren't "Elmer's glued but epoxied": 1. I never said "Elmer's glue" and 2. "glue" and "epoxy" are synonyms.
  • Joe 2007-10-10 13:26
    BigDigDug:
    I know one great example: Boston's Big Dig.

    Some highlights include the builders using deficient concrete and 2-ton concrete tiles that were GLUED to the roof of tunnels.

    Just like many IT projects, ultimately, it was an expensive pointless project to replace a working system with a new, flashy system that would solve all the problems of the old system because it was NEW.

    (It replaced a working highway with a new tunnel, because the raised highway was "ugly".)

    However, unlike most IT failures, this one did kill someone when one of the glued tiles became unglued.


    I drive through that tunnel every day. Shortly after that tile fell on the poor woman and killed her, you could see people looking up at the ceiling as they drove through the tunnel. Apparently, rear-ending someone at 50 MPH is preferable to having a 2-ton concrete tile come unglued and fall on you. Pick your poison.

    Oh, and for those of you outside Massachusetts, don't feel like you're absolved of it. When the project began incurring billions of dollars of cost overruns, the Feds bailed us out. Yup. Farmers in Wichita are having their income taken from them to pay for our clusterf*ck.

    Sorry folks.

    Captcha: doom - it's like the captcha bot knows what we're talking about. Shhh!
  • Coward 2007-10-10 13:44
    Hmmm....those failures sound a little bit like working at True.com...

  • Todd 2007-10-10 15:19
    I did a case study on VCF a while back. While looking through the mounds of information on it, I found the IEEE had a really good write-up on the details. You can read it at: http://www.spectrum.ieee.org/sep05/1455
  • different anon 2007-10-10 15:25
    BigDigDug:
    brazzy:

    The project may have been badly planned and executed, but it was far from pointless.

    Oh please, the whole tunnel thing was pointless. There are these two technologies that are simpler than tunnels and that have worked for thousands of years: plain old roads and bridges.

    Was there the space to add another glut of bridges and roads downtown and to the airport? Part of Boston's insanity is the lack of available space above ground.
  • EricS 2007-10-10 15:30
    http://en.wikipedia.org/wiki/The_Mythical_Man-Month

    What was true then is true now.
  • TC 2007-10-10 15:34
    The software projects I have worked on that have been 'failures' are predominantly due to business/management practices and culture. Occasionally I've come across incompetent coding, but it is not the worst contributor to failure of the projects I've been involved in.

    Generally, businesses want minimal time and minimal cost spent on a project but don't realize (or care) about the cost to quality. The only people who are aware of software development complexity - and therefore understand the time/cost/quality relationship - are developers or former developers. Try to explain to a non-coder the unit tests, patterns, workarounds, bugs, and dependencies (to name but a few) that contribute to a screen that appears simplistic to the users. They just don't get it.

    On the technical side, I think the tools available to developers are still quite immature and lack the kind of integration that helps eliminate (through automation) common problems that drag down the software development process. For example, I'm convinced of the worth of unit testing; however, at present, the time it takes to write good unit tests explains why many developers avoid them, even with the benefits they bring further into the life cycle of a project.
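
    Even a minimal JUnit case like the following (a hypothetical example, with the class under test inlined so it stands alone) takes real time to write well, and that cost is invisible to a non-coder:

        import org.junit.Test;
        import static org.junit.Assert.assertEquals;

        public class OrderTotalTest {
            // Trivial class under test, inlined to keep the example self-contained.
            static class Order {
                private long totalCents = 0;
                void add(String sku, long unitPriceCents, int quantity) {
                    totalCents += unitPriceCents * quantity;
                }
                long total() { return totalCents; }
            }

            @Test
            public void totalMultipliesUnitPriceByQuantity() {
                Order order = new Order();
                order.add("widget", 250, 4);
                assertEquals(1000L, order.total());
            }
        }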
  • blindman 2007-10-10 16:02
    "No one could have predicted that the software would have changed in the manner that it did." Bull$h1t. Scalability should be anticipated in every application. Only a complete noob would assume they fully understand the clients requirements, or the even the client fully understands their requirements. Duh.
  • Mark 2007-10-10 16:33
    TC:
    The software projects I have worked on that have been 'failures' are predominantly due to business/management practices and culture.


    Bingo.

    I once saw a BRD that had a bunch of really complex specifications but one of the last 'requirements' listed was 'the system must be launched by July 1.' It was late May when the BRD was released. So I was like 'July 1 of what year?'
  • Franz Kafka 2007-10-10 17:28
    different anon:
    Franz Kafka:

    That doesn't excuse the mess. shit happens and you deal with it.


    "Deal with it" means dealing with the moving targets... how, exactly? The government didn't stop adding demands and the programmers couldn't say "go fuck yourselves." Or was there another bureaucracy willing to foot the bill?

    How would *you* have done it without losing your funding?


    That comment was directed at the FBI. I'd have taken the project managers who were harassing the contractors out to the woodshed, so to speak, and replaced them if they didn't stop screwing up.
  • nobody 2007-10-10 17:36
    blindman:
    "No one could have predicted that the software would have changed in the manner that it did." Bull$h1t. Scalability should be anticipated in every application. Only a complete noob would assume they fully understand the clients requirements, or the even the client fully understands their requirements. Duh.

    Uh, scalability is an architectural quality describing the efficiency and limit to which a system can take advantage of additional resources (computational, storage, network, etc.) to handle greater or lesser workloads. Few systems have arbitrary scalability, and with degrees of scalability come costs charged against other qualities (response time, throughput (for a given set of resources), complexity, etc.), and therefore trade-offs need to be made.

    Scalability does not mitigate the obvious fact that one rarely knows all of the requirements, nor the fact that requirements will (almost always) change as a system of any significant complexity is built. It has nothing to do with that. The process you need to employ is Requirements Management. It is a continuous process and is named appropriately - it is not simply Requirements Gathering or Requirements Documentation. Things will change - always.

    Oh, and as another noted, Risk Management is essential in managing any significant project. Sometimes this means knowing when to pull the plug because the circumstances no longer allow the project to succeed. If you start designing an aircraft carrier and requirements change so that you now need a submarine, it's best to stop, re-group and approach the new requirements as a new project without the baggage of what you've done so far. OK, in real life it is seldom so black-and-white, but it is in circumstances like those experienced by VCF that management, architects and other leaders earn their keep.
  • joe 2007-10-10 22:50
    Here's SAIC's (the government contractor) side of the story:

    http://www.saic.com/cover-archive/law/trilogy.html

  • Synonymous Awkward 2007-10-11 03:57
    It's just been decided that a notoriously large and muddy application (it eats souls) where I work is to be replaced in its entirety. I've never heard a meeting room full of overworked programmers cheer at being given extra work before.

    As far as documentation goes, I always liked Lisp's docstring approach. Having the documentation as part of the code helps me concentrate that little extra on keeping it up-to-date; with a well-written program (not that I ever write those) you might even be able to use docstrings as part of your user-help system, which I suppose would be another incentive.
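    Python's docstrings work much the same way, for what it's worth; a minimal sketch (greet is a throwaway example, not from any real system), where help() reuses the same string as the beginnings of a user-help system:

        def greet(name):
            """Return a friendly greeting for the given name.

            Because this documentation lives inside the code, it is far
            more likely to be kept up to date when the function changes.
            """
            # 'greet' is a made-up example function.
            return "Hello, %s!" % name

        # The documentation travels with the object itself:
        print(greet.__doc__)
        help(greet)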
  • Paul B 2007-10-11 04:32
    Can we get a high-resolution version of the "code to ruin" diagram? I'd like to print it out and stick it on the wall.
  • The Frinton Mafia 2007-10-11 05:34

    Compared to the gigantic disasters that the UK's NHS is capable of, the US security services are actually fairly small and responsible organizations.

    http://burningourmoney.blogspot.com/2006/03/latest-on-nhs-computer-disaster.html

  • real_aardvark 2007-10-11 06:38
    Synonymous Awkward:
    It's just been decided that a notoriously large and muddy application (it eats souls) where I work is to be replaced in its entirety. I've never heard a meeting room full of overworked programmers cheer at being given extra work before.

    As far as documentation goes, I always liked Lisp's docstring approach. Having the documentation as part of the code helps me concentrate that little extra on keeping it up-to-date; with a well-written program (not that I ever write those) you might even be able to use docstrings as part of your user-help system, which I suppose would be another incentive.

    Lisp docstring, Javadoc, and other manifestations of literate programming are arguably better than nothing.

    Equally, they are arguably worse than nothing.

    Simply put, there is no one-to-one relationship between any part of the code and any part of the documentation. Nor is there a one-to-many, many-to-one, or many-to-many relationship. Nor is it possible to impose an arbitrary mapping such as "take this API entry, format it as X for the design, Y for the help page, Z for the user guide." Yes, you'll get "documentation," but the signal-to-noise ratio will render it unusable.

    The only viable alternative is to learn to read and write. Third-graders can do it, so why not programmers?

    If you want the system you're working on to last the proverbial fifteen years or more, you really do need to put some effort into producing documentation and keeping it up to date.
  • blindman 2007-10-11 09:16
    nobody:
    blindman:
    "No one could have predicted that the software would have changed in the manner that it did." Bull$h1t. Scalability should be anticipated in every application. Only a complete noob would assume they fully understand the clients requirements, or the even the client fully understands their requirements. Duh.

    Uh, scalability is an architectural quality describing the efficiency and limit to which a system can take advantage of additional resources (computational, storage, network, etc.) to handle greater or lesser workloads.
    Uh, as a database designer, I can tell you that scalability absolutely includes the ability to expand upon functionality without having to scrap the current design. If your definition does not include this, that speaks volumes about the quality of the applications you develop.
  • wgc 2007-10-11 09:41
    ammoQ:
    Anyway "nearly a million lines of code" doesn't seem very much to me. I wonder how they managed to spend 200M for that quantity of code.


    No kidding! Where can I get paid $200/LOC?
  • real_aardvark 2007-10-11 09:43
    blindman:
    nobody:
    blindman:
    "No one could have predicted that the software would have changed in the manner that it did." Bull$h1t. Scalability should be anticipated in every application. Only a complete noob would assume they fully understand the clients requirements, or the even the client fully understands their requirements. Duh.

    Uh, scalability is an architectural quality describing the efficiency and limit to which a system can take advantage of additional resources (computational, storage, network, etc.) to handle greater or lesser workloads.
    Uh, as a database designer, I can tell you that scalability absolutely includes the ability to expand upon functionality without having to scrap the current design. If your definition does not include this, that speaks volumes about the quality of the applications you develop.

    Hmmmm.
    Lewis Carroll:

    "--and that shows that there are three hundred and sixty-four days when you might get un-birthday presents--"

    "Certainly," said Alice.

    "And only one for birthday presents, you know. There's glory for you!"

    "I don't know what you mean by 'glory,'" Alice said.

    Humpty Dumpty smiled contemptuously. "Of course you don't--till I tell you. I meant, 'there's a nice knock-down argument for you!'"

    "But 'glory' doesn't mean 'a nice knock-down argument,'" Alice objected.

    "When I use a word," Humpty Dumpty said, in a rather scornful tone, "it means just what I choose it to mean--neither more nor less."

    As a legendary egg, I refute the conventional Database Designer definition of "scalability." 99% of household germs, or software engineers as Humpty would have it, use "scalability" to mean precisely what "nobody" says it means.

    I think the word you're looking for is "extensibility," and if your definition of that doesn't include "the ability to check before you pontificate," then that speaks volumes ...
  • wgc 2007-10-11 10:00
    Oh please, the whole tunnel thing was pointless. There are these two technologies that are simpler than tunnels and that have worked for thousands of years: plain old roads and bridges.

    Throwing the highway underground for aesthetic reasons is a perfect analogy to people using "enterprise" technologies when simpler technologies would work better and cheaper.


    Never been there, huh? The project may have been mismanaged, delayed, and over budget, with a few well-publicized goofs, but it's a huge success. As someone who used to make that commute, I'm convinced it would save me almost an hour a day of sitting in traffic. Multiply that by however many tens of thousands of cars doing the same thing and you get a huge improvement. I no longer do that commute, but it saves at least 20 minutes every time I go to the airport. This project is a success for most of its users (if not the people who paid for it).

    I used to work in a 35-story building that was literally feet away from the old elevated highway: maybe room for a sidewalk between them. How do you enlarge that road? How do you deal with all the cross-roads and connections? One of the biggest problems with the old road was too many old ramps with no room for merge lanes or exit lanes: where can you put those? Another issue with the old road was bends that were too sharp, with buildings in the way: how do you straighten those? Building vertically gives more room for extra lanes, ramps, and cross-streets. Building underground gives room for those plus straightening the road without moving buildings. Which would have been more expensive: building underground or moving buildings?
  • Hognoxious 2007-10-11 10:06
    Raedwald:
    That came home to me in a previous job. The company culture was to have a can do attitude
    Yeah, I heard of a guy who had that. Icarus, I think that was his name.

    http://despair.com/delusions.html
  • nobody 2007-10-11 10:52
    blindman:
    nobody:
    blindman:
    "No one could have predicted that the software would have changed in the manner that it did." Bull$h1t. Scalability should be anticipated in every application. Only a complete noob would assume they fully understand the clients requirements, or the even the client fully understands their requirements. Duh.

    Uh, scalability is an architectural quality describing the efficiency and limit to which a system can take advantage of additional resources (computational, storage, network, etc.) to handle greater or lesser workloads.
    Uh, as a database designer, I can tell you that scalability absolutely includes the ability to expand upon functionality without having to scrap the current design. If your definition does not include this, that speaks volumes about the quality of the applications you develop.

    What? It is not my definition - it is the accepted INDUSTRY definition. What you describe is generally known as flexibility, extensibility or maintainability - all important architectural qualities as well. In fact, these are at times MORE IMPORTANT than extensive scalability.

    Perhaps you should become more familiar with the definition of industry terms.
    http://en.wikipedia.org/wiki/Scalability
    http://www.pcmag.com/encyclopedia_term/0,2542,t=scalable&i=50835,00.asp
    http://datageekgal.blogspot.com/2007/04/10g-performance-and-tuning-guide-first.html
    (the latter thrown in for you database guys)

    I suppose one can think of scaling of the problem space (the ability to handle additional requirements) but that is an unusual usage of the term. Not wrong I suppose, just unusual. The problem with the unusual usage is that it is likely to create confusion among others as they will (most likely) be thinking about workload handling and not requirements handling.

    But perhaps you work in a community where that definition is common. I am not familiar with any such community.
  • nobody 2007-10-11 11:05
    joe:
    Here's SAIC's (the government contractor) side of the story:

    http://www.saic.com/cover-archive/law/trilogy.html


    A very interesting read. I've seen a few large projects where the client has insisted on a "flash cut-over". These did not go well. For complex, interconnected and "mission critical" systems, anything other than a risk-averse, incremental roll-out is a VERY BAD IDEA.

    Whenever I have failed to walk away from a project where the client continued to insist upon such a plan after the architects and project managers said it was ill-advised, I have ALWAYS regretted it. Clients get stupid ideas. Professionals tell them the truth and don't let them go on adventures that will end in failure. Or at least they don't help them fail.

    Unfortunately for clients and customers, our industry is not a profession.
  • blindman 2007-10-11 11:06
    nobody:
    blindman:
    nobody:
    blindman:
    "No one could have predicted that the software would have changed in the manner that it did." Bull$h1t. Scalability should be anticipated in every application. Only a complete noob would assume they fully understand the clients requirements, or the even the client fully understands their requirements. Duh.

    Uh, scalability is an architectural quality describing the efficiency and limit to which a system can take advantage of additional resources (computational, storage, network, etc.) to handle greater or lesser workloads.
    Uh, as a database designer, I can tell you that scalability absolutely includes the ability to expand upon functionality without having to scrap the current design. If your definition does not include this, that speaks volumes about the quality of the applications you develop.

    What? It is not my definition - it is the accepted INDUSTRY definition. What you describe is generally known as flexibility, extensibility or maintainability - all important architectural qualities as well. In fact, these are at times MORE IMPORTANT than extensive scalability.

    Perhaps you should become more familiar with the definition of industry terms.
    http://en.wikipedia.org/wiki/Scalability
    http://www.pcmag.com/encyclopedia_term/0,2542,t=scalable&i=50835,00.asp
    http://datageekgal.blogspot.com/2007/04/10g-performance-and-tuning-guide-first.html
    (the latter thrown in for you database guys)

    I suppose one can think of scaling of the problem space (the ability to handle additional requirements) but that is an unusual usage of the term. Not wrong I suppose, just unusual. The problem with the unusual usage is that it is likely to create confusion among others as they will (most likely) be thinking about workload handling and not requirements handling.

    But perhaps you work in a community where that definition is common. I am not familiar with any such community.
    Then you should really get out more. Database designers frequently speak of making a database scalable to multiple sites, multiple clients, multiple products, multiple forms, etc...
    When a client comes to me and says they have only one location, I have the foresight to ask about the possibility that at some point in the future they may have more than one location, and design the application to handle that. If you do not, then your limited definition of scalability is doing your clients a disservice.
  • BigDigDug 2007-10-11 11:08
    wgc:
    I used to work in a 35-story building that was literally feet away from the old elevated highway: maybe room for a sidewalk between them. How do you enlarge that road? How do you deal with all the cross-roads and connections? One of the biggest problems with the old road was too many old ramps with no room for merge lanes or exit lanes: where can you put those? Another issue with the old road was bends that were too sharp, with buildings in the way: how do you straighten those? Building vertically gives more room for extra lanes, ramps, and cross-streets. Building underground gives room for those plus straightening the road without moving buildings. Which would have been more expensive: building underground or moving buildings?

    Planning ahead in the first place, before the buildings were allowed to be built? There's no conceivable way that it should be cheaper to completely replace a highway with a tunnel than to expand a highway, unless the city planners were incompetent at a level that is truly worse than failure.

    Not to mention that there's always eminent domain, which would probably have been cheaper than tunneling. Tear down the buildings that should never have been allowed to be created in the first place and do it RIGHT this time.

    Besides, what's going to happen when you need to expand the tunnel? Or is Boston's population shrinking?

    Boston has created yet another scenario where it's impossible to expand, failing to learn the "scalability" requirement that caused the entire mess in the first place.
  • nobody 2007-10-11 11:20
    blindman:
    Then you should really get out more. Database designers frequently speak of making a database scalable to multiple sites, multiple clients, multiple products, multiple forms, etc...
    When a client comes to me and says they have only one location, I have the foresight to ask about the possibility that at some point in the future they may have more than one location, and design the application to handle that. If you do not, then your limited definition of scalability is doing your clients a disservice.

    I can imagine multiple locations having an impact on scalability requirements (industry standard definition), response time requirements (added network delays) and a host of possible functional requirements. Duh.

    Anyone who does not thoroughly elicit and analyze requirements (functional and architectural) and likely change cases will tend to end up designed into a corner. Sure, you can refactor your way out of some corners, but architectural limitations typically don't lend themselves to that. You can architect a system to flex in ways that allow you to cover reasonable changing or "found" requirements - in fact doing otherwise is a ticket to failure.

    I find it amusing that you feel the need to imply that I don't know what I am doing. I do.
  • Some Atheist 2007-10-11 12:40

    Enough with the sermons.

    Even if you define "WTF" as Worse Than Failure, this wasn't even a WTF, it was just a simple F. Since you can't seem to distinguish between a WTF and an F, I'll do you a favor and give you a free U, as in:

    F. U.

    CAPTCHA: ewww (my thoughts exactly)
  • real_aardvark 2007-10-11 12:57
    blindman:
    Then you should really get out more. Database designers frequently speak of making a database scalable to multiple sites, multiple clients, multiple products, multiple forms, etc...
    When a client comes to me and says they have only one location, I have the foresight to ask about the possibility that at some point in the future they may have more than one location, and design the application to handle that. If you do not, then your limited definition of scalability is doing your clients a disservice.

    Only if the sole noun that you recognise is "scalability."

    Multiple sites: scalability.
    Multiple clients: scalability.
    Multiple products: I don't know what the heck this means, but in all probability extensibility, not scalability.
    Multiple forms: extensibility.
    "etc": unknowable at this time.

    I suppose the last three would (in a very minor way) fall partially under "scalability" if you have to juggle around with extents and the like, but I thought that Oracle did all that for you, these days.

    There's enough namespace pollution in computer terminology already, without needlessly adding to it and obfuscating a perfectly useful and well-understood term.

    BigDigDug:
    Planning ahead in the first place, before the buildings were allowed to be built? There's no conceivable way that it should be cheaper to completely replace a highway with a tunnel than to expand a highway, unless the city planners were incompetent at a level that is truly worse than failure.

    Not to mention that there's always eminent domain, which would probably have been cheaper than tunneling. Tear down the buildings that should never have been allowed to be created in the first place and do it RIGHT this time.

    Besides, what's going to happen when you need to expand the tunnel? Or is Boston's population shrinking?

    Boston has created yet another scenario where it's impossible to expand, failing to learn the "scalability" requirement that caused the entire mess in the first place.

    I think the obvious solution would have been to hold off building Boston until around 1950, when the little local difficulty with motorised traffic started to become apparent.

    Damn those eighteenth century town planners ...

    The exercise of eminent domain in the case of Faneuil Hall and the like might prove a mite contentious outside the rather narrow viewpoint of the Al Qaeda school of civic rectitude.
  • nobody 2007-10-11 13:06
    Some Atheist:

    Enough with the sermons.

    Even if you define "WTF" as Worse Than Failure, this wasn't even a WTF, it was just a simple F. Since you can't seem to distinguish between a WTF and a F then I'll do you a favor and give you a free U, as in:

    F. U.

    Quite the intellect, aren't we?
  • Jackal von ÖRF 2007-10-11 16:34
    Alex Papadimoulis:
    "Avoid premature generalization," Haack advises. "Don't build the system to predict every change. Make it resilient to change."

    As for knowing when to generalize, Haack lives by the rule of three: "The first time you notice something that might repeat, don't generalize it. The second time the situation occurs, develop in a similar fashion -- possibly even copy/paste -- but don't generalize yet. On the third time, look to generalize the approach."

    In the case of Haack's Web application, this approach would have motivated the team to begin generalizing the working code with each new iteration. Over time, the changes might have prevented "ancient" parts of the code from becoming an unmanageable burden.

    ...

    Business analysts and software developers must be trained to understand that the software they build can and will change in ways they'll never be able to predict. Therefore, tools such as refactoring and design patterns should be an integral part of the development process.

    Testing and quality assurance also need to be resilient to change. When the software changes, so should the unit tests, smoke tests and regression tests. Without adequate testing early and throughout, defects from the constant changes will pile on and become even more difficult to resolve.


    Isn't this just what agile methods are all about, making it easier to adapt to change? A few months ago Alex was bashing agile (http://worsethanfailure.com/Articles/The-Great-Pyramid-of-Agile.aspx), but has he now changed his opinion?

    As for myself, I've found TDD and related methods to be good for producing high-quality code. Some of the other agile practices I'm slightly doubtful about, especially the way requirements are gathered from users as user stories that are immediately written as code. Not because they would not work, but because I know better methods (which also happen to be iterative and test-driven) for designing systems that do what the user needs (not what the user says he would like to have) and that produce user interfaces with high utility, efficiency and learnability.

    Addendum (2007-10-11 17:03):
    PS: I think it would be good to have a new category for "serious articles" (such as this and The Mythical Business Layer and the older ones), so that it would be easy to find them afterwards from amongst all the "traditional WTF articles".
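    To make Haack's rule of three (quoted above) concrete, here is a minimal Python sketch; the report formatters are hypothetical, invented here only to show the progression: tolerate the first two similar implementations, even as copy/paste, and generalize only when the third appears.

        # First occurrence: just write it.
        def format_csv(rows):
            return "\n".join(",".join(str(v) for v in row) for row in rows)

        # Second occurrence: same shape, copy/paste tolerated -- don't generalize yet.
        def format_tsv(rows):
            return "\n".join("\t".join(str(v) for v in row) for row in rows)

        # Third occurrence: the pattern has clearly repeated, so generalize now
        # and express new variants in terms of the general version.
        def format_delimited(rows, delimiter):
            return "\n".join(delimiter.join(str(v) for v in row) for row in rows)

        def format_pipe(rows):
            return format_delimited(rows, "|")

    At this point the earlier two functions can be folded into format_delimited as well, which is the "generalize the working code with each new iteration" step the article describes.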
  • Chrordata 2007-10-13 21:28
    We all know that building designs are not changed lightly: to change a building after the foundations are laid is to invite expense and risk failure. We all know software designs are changed lightly, causing expense and failure. Time to stop pretending that large codebases can change; if a customer provides a new set of requirements, that equals a new product. New code, from scratch.

    This way, the customer gets the real price, up front. Frequent changes after the project has started then have transparent costs. Changed requirements are still possible, but to make them requires a new contract. A new contract means a new delivery date and a new price. The old code is thrown away.

    It's the sort of thing which would need serious laws to enforce. We need the respect and responsibilities of architects. (insert random real profession with professional liability and enforced accreditation).

    However: we currently operate like the less respectable carnival sideshow operators. And some have to take orders from clowns.

    Upside of scary grown-up system: we could deliver working products. And feel good about ourselves again.
  • Nano 2007-10-14 02:31
    The problem: The team decided to keep it simple: no database, no special configuration and no extensibility.

    The solution: Perhaps add a database, special configurations and extensibility? No! Instead, use Haack's Rule of Three: just hack in kludges, preferably with a straight copy/paste, until the architecture starts groaning under the weight, then undo the last few months' work for a complete rewrite just before it collapses. Can't beat the Microsoft Way.
  • aubin 2007-10-14 18:27
    Amen to that! My friends and I have been advocating architect/engineer-like licensing for software professionals for years now - and we're only ~25 - because we know we'd be able to get our licenses, and the people who turn this industry into a joke (in college we had *professors* who wrote worse code than we did) would have to find something else to do. Just please, please, if this ever happens, make it required for the "PHB" to have his license, too...

    captcha: sanitarium - see, even the captcha bot wants us to clean up the software landscape!
  • James 2007-10-16 09:39
    Reading all of this is pretty funny to me. This project failed because of massive supplies of idiocy.

    Now, the notion that people without formal training cannot do projects like this ... oh, that got me laughing. The largest idiots I've known in my 30+ years of development have had college degrees.

    This project and most others that fail miserably like it fail primarily because these companies fail to do one important thing: hire and foster high-quality talent.

    Another huge issue is the failure to break such massive projects down into manageable subsystems. It's now in vogue, but I've done it for 20 years. It works great.

    Scalability: always plan on it. There's always some joker. Just plan on it.

    And ... please, have some of these college-educated moronic managers read The Mythical Man-Month. You can't just toss more consultants at a project and get it done on time. In fact, that often has the reverse effect.

    A while back I was on a project that got bloated and horrible. Management kept adding people. Finally, management said that they were going to trash the project after a couple years of development.

    I went to the head boss and told him that if we kept the core, talented folks, canned the rest, and started over ... we could be up and running within 60 days. He took the risk, cut the team from 20 down to 4, and kicked serious code ass.

    Documentation: we did architectural documentation and used automatic systems (such as those now found in VS.NET) to manage the documentation of functions, classes and modules. Much of the code was self-documenting as we were brutal with each other about naming ... if something wasn't clear, we made the developer responsible change it.

    So, let's put the blame for these large failures where it belongs: with the idiots who shouldn't be hired for this work in the first place.
  • dotnetgeek 2007-10-16 13:00
    Is there a larger image for "The code to ruin"? I want to print a poster and put it in my cube.
  • sweavo 2007-10-17 08:15
    JohnFromTroy:
    Excellent essay.

    The rule-o-three thing is worthy of coder's scripture.


    I concur. I am close to having screwed some of my deadlines by generalising too early, spending time on a wonderful infrastructure that is at present scarcely used.
  • fa 2008-03-18 14:23
    Sin Tax:
    Expensive failed projects - that's not a real WTF.

    No, the real WTF is a project like the one I'm attached to. A public, underfunded customer who can't really afford the solution necessary or desired. Eternal fighting over price, deliveries, eternal dissssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssssscussion about whether to upgrade the obsolete and out-of-support J2EE/Portal product it is built on, a production environment that was put together by half-clueless developers (who were particularly clueless about building a production environment for high availability and maintainability), which is now so fragile, that the appservers need to be kicked every night. Management trying to fix problems by throwing more hardware or more people at them.

    The *REAL* WTF? This system is intended to be a core element of a national strategy. It is the poster case for how to do such a thing right. It has won countless awards, nationally and internationally. It actually manages to be marginally useful as well, yet everyone technical who has been clued a bit about its internals will agree that it's rotten to the core.

    Capthcha: Paint. Yeah, people don't care about quality. They care about shiny color.

    Sin Tax
  • Dave 2009-06-25 06:53
    Richard Sargent:

    As for knowing when to generalize, Haack lives by the rule of three: "The first time you notice something that might repeat, don't generalize it. The second time the situation occurs, develop in a similar fashion -- possibly even copy/paste -- but don't generalize yet. On the third time, look to generalize the approach."


    P.J.Plauger published an article many years ago. I think it was in Software Development magazine, sometime in 1989. The article was about what he called the "0, 1, infinity" rule.

    In essence, once the pattern goes multiple, it is time to design accordingly. Switch from the singular pattern into a multiple pattern and be ready for all future enhancements of the same nature. They will come. Build it. :-)


    That's a very different version of the 0, 1, Infinity rule than I've seen before.

    The version I'm familiar with says that in a system, for any particular thing, you should allow either no instances (that is, the thing is prohibited), exactly one instance (that is, an exception to the prohibition), or infinitely many instances (or, at least, as many as system resources allow).

    The purpose of this is to avoid placing limits that are purely arbitrary. You wouldn't want, for instance, a mail client that placed an arbitrary limit on the length of folder names, or the number of folders, or the depth of folder nesting, because it would be frustrating under some circumstances. Thus, you should avoid writing software that does that kind of thing.
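    A minimal sketch of the principle in Python, using the hypothetical mail-folder example from above: zero subfolders, one, or as many as resources allow, but never an arbitrary cap.

        class Folder:
            """A mail folder whose subfolders may nest arbitrarily deep --
            limited by memory, not by a magic constant like 'at most 7'."""
            # Hypothetical example class, purely illustrative.

            def __init__(self, name):
                self.name = name
                self.subfolders = []  # grows without an arbitrary fixed size

            def add_subfolder(self, name):
                child = Folder(name)
                self.subfolders.append(child)
                return child

        # Usage: nesting depth is limited only by system resources.
        inbox = Folder("Inbox")
        work = inbox.add_subfolder("Work")
        work.add_subfolder("Project X")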
  • ELIZA 2010-02-06 13:47
    The Frinton Mafia:

    Compared to the gigantic disasters that the UK's NHS is capable of, the US security services are actually fairly small and responsible organizations.

    http://burningourmoney.blogspot.com/2006/03/latest-on-nhs-computer-disaster.html


    That is in turn nothing compared to the disaster that IS the US health insurance system, at any rate from the standpoint of the good of the nation and that of her people. Instead of one component as Britain has, it has three major and a few minor components:
    1) Medicare, which with the exception of Part D works very well (estimated efficiency of .96 to .98 with complete socialisation of coverage, private insurors refuse to compete with it on a financially and actuarially fair basis) (Part D is (for some reason, possibly Cargo Cult Economics of some sort) handled through private insurors and prevented from bargaining with the pharmaceutical companies over drug prices*).
    2) Employer-based Insurance in which all workers are offered an identical insurance package (because of taxable income deductions for such), which is coming apart at the seams (intracorporate socialisation of coverage is complete but efficiency is low (.7 to .8), the lack of government health coverage is driving businesses with foreign competitors under, and insurors can refuse to cover lifesaving medicine (certain justices' interpretations of ERISA: even if you sue them and win, they only have to pay the cost of the denied medicine)).
    3) Individual Insurance, which is a disaster on all fronts (near-complete actuarialisation, low efficiency (.7 to .8), soaring costs (under the latest "reform" bill, it could cost, for a middle-class family of four with an income of $50k, more than a sixth of the family's income to obtain the mandatory minimum coverage), and some insurors have been known to use various methods to avoid paying for medical procedures, up to and including retroactively stripping people of their coverage).
    Minor parts include the Veterans Administration, which includes a mini-NHS for veterans (for its funding, it is almost certainly THE most efficient insuror in the US, due to its ability to provide its patients NHS-style long-term coverage and use NHS-style bargaining over pharmaceutical prices); Medicaid, for poor people (it is now very possible, and for an estimated forty-five to fifty million Americans very much the case, to be too rich to qualify for Medicaid but too poor to afford private insurance, which is why S-CHIP exists; being partly state-funded, it is likely collapsing because of the current economic crisis and its effects on the state treasuries); S-CHIP, similar to Medicaid but for children in the gap between Medicaid and being able to afford private insurance; Health Maintenance Organisations, private analogues of the NHS but treatable as ordinary private insurors; and the proposed "Medicare buy-in", which would have allowed people to buy Government insurance (like Canadians; a Canadian middle-class family of four is reported to pay $108 in British Columbia for a basic plan (cf http://dneiwert.blogspot.com/2007/02/go-ahead-and-die.html; it varies by province) v the $8.5k estimated for basic private insurance in the US AFTER reform) and given Medicare a larger subscription base for oligopsony bargaining.
    The resulting mess of organisations is estimated to spend TWO AND A HALF times as much as Britain on healthcare** and still somehow gets care little better than Britain's (the comparison with the world leader in healthcare, France, is worse, as France spends half as much as the US on healthcare), and an estimated twenty-two thousand die each year for lack of insurance (http://dneiwert.blogspot.com/2009/07/tommy-douglas-canadas-answer-to-abe.html).
    * Pharmaceutical patents themselves are nowadays of questionable value; I would replace them with prizes for invention were I a government or, failing that, use eminent domain to make them public for a reasonable price. Process Patents, for a SPECIFIC process by which a drug is synthesised, I might perhaps allow to stand.
    ** The NHS is notoriously underfunded; cf the Amateur Transplants' NHS Song.
  • Alex 2010-08-04 21:49
    I don't understand what you are trying to say. When an agilist says that the docs will become outdated, this means with respect to how the application actually works (the code). How can the code be outdated with respect to the code?


    This has already been answered, but I'd like to add my 2 cents.

    Actually, it's pretty easy for code to get outdated. Of course it's not outdated with respect to the code itself, but it easily becomes outdated with respect to the reasons for the code being there. Seen that a lot.

    E.g., code circumvents some obscure bug in library X, but is 10x slower than it should be. The obscure bug is fixed, but the workaround never gets removed or switched back to the original code. You need to document things like that! Filing a bug in your own bug tracker for it could be a way of doing that. I would even go as far as filing that bug and putting a comment in the code that points to it. Maybe even link that bug to the bug report you created upstream. Then again, I've seen programmers who can't even read a comment two lines above the line of code they are editing, and who add a comment asking about the reasons for doing something that were explained just two lines above ... go figure.
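    A sketch of that comment-plus-tracker convention in Python; the issue number, the BOM bug and load_config are all invented for illustration:

        import json

        def load_config(path):
            # WORKAROUND, see tracker issue #4217 (hypothetical): an older
            # parser in our stack rejects files that start with a UTF-8 BOM,
            # so strip it before parsing. Remove this branch once we upgrade
            # past the fixed release -- the upstream bug report is linked
            # from #4217.
            with open(path, "rb") as f:
                raw = f.read()
            if raw.startswith(b"\xef\xbb\xbf"):
                raw = raw[3:]
            return json.loads(raw.decode("utf-8"))

    The point is that the "why" is recorded twice: in the tracker, with full history, and at the exact line that someone will otherwise "clean up" two years from now.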