Comment On Avoiding Development Disasters

As most development managers know, the FBI's Virtual Case File (VCF) system has become the epitome of the expensive failed software project. Costing taxpayers between $100 and $200 million over four years, the VCF delivered little more than a mountain of useless documentation, nearly a million lines of code that will never run in production, and a whole lot of costly lessons. Worse still, the lessons offered by this multi-million-dollar failure could just as easily have been found in a $50 software engineering textbook. In fact, the major factors behind VCF's failure read much like such a book's table of contents:

Enterprise Architecture: VCF had none.
Management: Developers were both poorly managed and micromanaged.
Skilled Personnel: Managers and engineers with no formal training were placed in critical roles.
Requirements: They were constantly being changed.
Scope Creep: New features were added even after the project was behind schedule.
Steady Team: More people were constantly added to the team in an attempt to speed the project along.

Re: Avoiding Development Disasters

2007-10-11 11:06 • by blindman (unregistered)
156752 in reply to 156750
nobody:
blindman:
nobody:
blindman:
"No one could have predicted that the software would have changed in the manner that it did." Bull$h1t. Scalability should be anticipated in every application. Only a complete noob would assume they fully understand the clients requirements, or the even the client fully understands their requirements. Duh.

Uh, scalability is an architectural quality describing the efficiency and limit to which a system can take advantage of additional resources (computational, storage, network, etc.) to handle greater or lesser workloads.
Uh, as a database designer, I can tell you that scalability absolutely includes the ability to expand upon functionality without having to scrap the current design. If your definition does not include this, that speaks volumes about the quality of the applications you develop.

What? It is not my definition - it is the accepted INDUSTRY definition. What you describe is generally known as flexibility, extensibility or maintainability - all important architectural qualities as well. In fact, these are at times MORE IMPORTANT than extensive scalability.

Perhaps you should become more familiar with the definition of industry terms.
http://en.wikipedia.org/wiki/Scalability
http://www.pcmag.com/encyclopedia_term/0,2542,t=scalable&i=50835,00.asp
http://datageekgal.blogspot.com/2007/04/10g-performance-and-tuning-guide-first.html
(the latter thrown in for you database guys)

I suppose one can think of scaling of the problem space (the ability to handle additional requirements) but that is an unusual usage of the term. Not wrong I suppose, just unusual. The problem with the unusual usage is that it is likely to create confusion among others as they will (most likely) be thinking about workload handling and not requirements handling.

But perhaps you work in a community where that definition is common. I am not familiar with any such community.
Then you should really get out more. Database designers frequently speak of making a database scalable to multiple sites, multiple clients, multiple products, multiple forms, etc...
When a client comes to me and says they have only one location, I have the foresight to ask about the possibility that at some point in the future they may have more than one location, and design the application to handle that. If you do not, then your limited definition of scalability is doing your clients a disservice.
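
A minimal sketch of that idea in code (hypothetical names, Python used only for illustration): even when the client has a single location today, modelling the location as its own entity and keying records off a location_id costs almost nothing now and avoids a painful redesign later.

    from dataclasses import dataclass

    # Hypothetical model: Location is a first-class entity even if, today,
    # the client only ever creates one of them.
    @dataclass(frozen=True)
    class Location:
        location_id: int
        name: str

    @dataclass
    class Order:
        order_id: int
        location_id: int  # every order is tied to a location from day one
        total_cents: int

    def orders_for_location(orders, location_id):
        # Filtering by location is trivial once the key exists everywhere.
        return [o for o in orders if o.location_id == location_id]

    # With a single site, the extra field is effectively a constant; with ten
    # sites, nothing about the model has to change.
    headquarters = Location(location_id=1, name="Main office")
    orders = [Order(order_id=1, location_id=headquarters.location_id, total_cents=4999)]
    print(orders_for_location(orders, headquarters.location_id))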

Re: Avoiding Development Disasters

2007-10-11 11:08 • by BigDigDug (unregistered)
156753 in reply to 156743
wgc:
I used to work in a 35-story building that was literally feet away from the old elevated highway: maybe room for a sidewalk between them. How do you enlarge that road? How do you deal with all the cross-roads and connections? One of the biggest problems with the old road was too many old ramps with no room for merge lanes or exit lanes: where can you put those? Another issue with the old road was bends that were too sharp, with buildings in the way: how do you straighten those? Building vertically gives more room for extra lanes, ramps, and cross-streets. Building underground gives room for those plus straightening the road without moving buildings. Which would have been more expensive: building underground or moving buildings?

Planning ahead in the first place, before the buildings were allowed to be built? There's no conceivable way that it should be cheaper to completely replace a highway with a tunnel than to expand a highway, unless the city planners were incompetent at a level that is truly worse than failure.

Not to mention that there's always eminent domain, which would probably have been cheaper than tunneling. Tear down the buildings that should never have been allowed to be created in the first place and do it RIGHT this time.

Besides, what's going to happen when you need to expand the tunnel? Or is Boston's population shrinking?

Boston has created yet another scenario where it's impossible to expand, failing to learn the "scalability" requirement that caused the entire mess in the first place.

Re: Avoiding Development Disasters

2007-10-11 11:20 • by nobody (unregistered)
156754 in reply to 156752
blindman:
Then you should really get out more. Database designers frequently speak of making a database scalable to multiple sites, multiple clients, multiple products, multiple forms, etc...
When a client comes to me and says they have only one location, I have the foresight to ask about the possibility that at some point in the future they may have more than one location, and design the application to handle that. If you do not, then your limited definition of scalability is doing your clients a disservice.

I can imagine multiple locations having an impact on scalability requirements (industry standard definition), response time requirements (added network delays) and a host of possible functional requirements. Duh.

Anyone who does not thoroughly elicit and analyze requirements (functional and architectural) and likely change cases will tend to end up designed into a corner. Sure, you can refactor your way out of some corners, but architectural limitations typically don't lend themselves to that. You can architect a system to flex in ways that allow you to cover reasonable changes or "found" requirements - in fact, doing otherwise is a ticket to failure.

I find it amusing that you feel the need to imply that I don't know what I am doing. I do.

Re: Avoiding Development Disasters

2007-10-11 12:40 • by Some Atheist (unregistered)

Enough with the sermons.

Even if you define "WTF" as Worse Than Failure, this wasn't even a WTF, it was just a simple F. Since you can't seem to distinguish between a WTF and a F then I'll do you a favor and give you a free U, as in:

F. U.

CAPTCHA: ewww (my thoughts exactly)

Re: Avoiding Development Disasters

2007-10-11 12:57 • by real_aardvark
156767 in reply to 156752
blindman:
Then you should really get out more. Database designers frequently speak of making a database scalable to multiple sites, multiple clients, multiple products, multiple forms, etc...
When a client comes to me and says they have only one location, I have the foresight to ask about the possibility that at some point in the future they may have more than one location, and design the application to handle that. If you do not, then your limited definition of scalability is doing your clients a disservice.

Only if the sole word that you recognise is "scalability."

Multiple sites: scalability.
Multiple clients: scalability.
Multiple products: I don't know what the heck this means, but in all probability extensibility, not scalability.
Multiple forms: extensibility.
"etc": unknowable at this time.

I suppose the last three would (in a very minor way) fall partially under "scalability" if you have to juggle around with extents and the like, but I thought that Oracle did all that for you, these days.

There's enough namespace pollution in computer terminology already, without needlessly adding to it and obfuscating a perfectly useful and well-understood term.

BigDigDug:
Planning ahead in the first place, before the buildings were allowed to be built? There's no conceivable way that it should be cheaper to completely replace a highway with a tunnel than to expand a highway, unless the city planners were incompetent at a level that is truly worse than failure.

Not to mention that there's always eminent domain, which would probably have been cheaper than tunneling. Tear down the buildings that should never have been allowed to be created in the first place and do it RIGHT this time.

Besides, what's going to happen when you need to expand the tunnel? Or is Boston's population shrinking?

Boston has created yet another scenario where it's impossible to expand, failing to learn the "scalability" requirement that caused the entire mess in the first place.

I think the obvious solution would have been to hold off building Boston until around 1950, when the little local difficulty with motorised traffic started to become apparent.

Damn those eighteenth century town planners ...

The exercise of eminent domain in the case of Faneuil Hall and the like might prove a mite contentious outside the rather narrow viewpoint of the Al Qaeda school of civic rectitude.

Re: Avoiding Development Disasters

2007-10-11 13:06 • by nobody (unregistered)
156772 in reply to 156765
Some Atheist:

Enough with the sermons.

Even if you define "WTF" as Worse Than Failure, this wasn't even a WTF, it was just a simple F. Since you can't seem to distinguish between a WTF and a F then I'll do you a favor and give you a free U, as in:

F. U.

Quite the intellect, aren't we?

Re: Avoiding Development Disasters

2007-10-11 16:34 • by Jackal von ÖRF
Alex Papadimoulis:
"Avoid premature generalization," Haack advises. "Don't build the system to predict every change. Make it resilient to change."

As for knowing when to generalize, Haack lives by the rule of three: "The first time you notice something that might repeat, don't generalize it. The second time the situation occurs, develop in a similar fashion -- possibly even copy/paste -- but don't generalize yet. On the third time, look to generalize the approach."

In the case of Haack's Web application, this approach would have motivated the team to begin generalizing the working code with each new iteration. Over time, the changes might have prevented "ancient" parts of the code from becoming an unmanageable burden.

...

Business analysts and software developers must be trained to understand that the software they build can and will change in ways they'll never be able to predict. Therefore, tools such as refactoring and design patterns should be an integral part of the development process.

Testing and quality assurance also need to be resilient to change. When the software changes, so should the unit tests, smoke tests and regression tests. Without adequate testing early and throughout, defects from the constant changes will pile on and become even more difficult to resolve.
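
As a rough, hypothetical sketch of the rule of three in code (the report-header example is invented, not from the article): the first and second occurrences stay concrete, and only the third prompts extracting a shared helper.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Doc:
        number: str
        issued: date

    # Occurrence 1: just write the concrete code.
    def format_invoice_header(doc):
        return f"INVOICE {doc.number} - {doc.issued:%Y-%m-%d}"

    # Occurrence 2: a similar need appears; per the rule, still no abstraction,
    # even if that means a near copy/paste.
    def format_receipt_header(doc):
        return f"RECEIPT {doc.number} - {doc.issued:%Y-%m-%d}"

    # Occurrence 3: the pattern has proven itself, so generalize it.
    def format_header(kind, doc):
        return f"{kind.upper()} {doc.number} - {doc.issued:%Y-%m-%d}"

    print(format_invoice_header(Doc("2007-042", date(2007, 10, 11))))
    print(format_header("quote", Doc("2007-043", date(2007, 10, 12))))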


Isn't this just what agile methods are all about, making it easier to adapt to change? A few months ago Alex was bashing agile (http://worsethanfailure.com/Articles/The-Great-Pyramid-of-Agile.aspx), but has he now changed his opinion?

As for myself, I've found TDD and related methods to be good for producing high-quality code. Some of the other agile practices I'm slightly doubtful about, especially the way requirements are gathered from users as user stories that are written straight into code. Not because they would not work, but because I know better methods (which also happen to be iterative and test-driven) for designing systems that do what the user needs (not what the user says he would like to have) and that produce user interfaces with high utility, efficiency and learnability.

Addendum (2007-10-11 17:03):
PS: I think it would be good to have a new category for "serious articles" (such as this and The Mythical Business Layer and the older ones), so that it would be easy to find them afterwards from amongst all the "traditional WTF articles".

Re: Avoiding Development Disasters

2007-10-13 21:28 • by Chrordata
We all know that building designs are not changed lightly: to change a building after the foundations are laid is to invite expense and risk failure. We all know software designs are changed lightly, causing expense and failure. Time to stop pretending that large codebases can change: if a customer provides a new set of requirements, that equals a new product. New code, from scratch.

This way, the customer gets the real price, up front. Frequent changes after the project has started have transparent costs. Changed requirements are still possible, but to make them requires a new contract. A new contract means a new delivery date and a new price. The old code is thrown away.

It's the sort of thing which would need serious laws to enforce. We need the respect and responsibilities of architects. (insert random real profession with professional liability and enforced accreditation).

However: we currently operate like the less respectable carnival sideshow operators. And some have to take orders from clowns.

Upside of scary grown-up system: we could deliver working products. And feel good about ourselves again.

This was the WTF, right?

2007-10-14 02:31 • by Nano (unregistered)
The problem: The team decided to keep it simple: no database, no special configuration and no extensibility.

The solution: Perhaps add a database, special configurations and extensibility? No! Instead, use Haack's Rule of Three: just hack in kludges, preferably with a straight copy/paste, until the architecture starts groaning under the weight, then undo the last few months' work with a complete rewrite just before it collapses. Can't beat the Microsoft Way.

Re: Avoiding Development Disasters

2007-10-14 18:27 • by aubin (unregistered)
157090 in reply to 157069
Amen to that! My friends and I have been advocating architect/engineer-like licensing for software professionals for years now - and we're only ~25 - because we know we'd be able to get our licenses, and the people who turn this industry into a joke (in college we had *professors* who wrote worse code than we did) would have to find something else to do. Just please, please, if this ever happens, make it required for the "PHB" to have his license, too...

captcha: sanitarium - see, even the captcha bot wants us to clean up the software landscape!

Re: Avoiding Development Disasters

2007-10-16 09:39 • by James (unregistered)
Reading all of this is pretty funny to me. This project failed because of massive supplies of idiocy.

Now, the notion that people without formal training cannot do projects like this ... oh, that got me laughing. The largest idiots I've known in my 30+ years of development have had college degrees.

This project and most others that fail miserably like it fail primarily because these companies fail to do one important thing: hire and foster high-quality talent.

Another huge issue is the failure to break such massive projects down into manageable subsystems. It's now in vogue, but I've done it for 20 years. It works great.

Scalability: always plan on it. There's always some joker. Just plan on it.

And ... please, have some of these college-educated moronic managers read The Mythical Man-Month. You can't just toss more consultants at a project and get it done on time. In fact, the effect is often the reverse.

A while back I was on a project that got bloated and horrible. Management kept adding people. Finally, management said that they were going to trash the project after a couple years of development.

I went to the head boss and told him that if we kept the core and talented folks, canned the rest, and started over ... we could be up and running within 60 days. He took the risk, cut the team from 20 down to 4, and kicked serious code ass.

Documentation: we did architectural documentation and used automatic systems (such as those now found in VS.NET) to manage the documentation of functions, classes and modules. Much of the code was self-documenting as we were brutal with each other about naming ... if something wasn't clear, we made the developer responsible change it.
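
A small, hypothetical illustration of that combination (the function and its docstring are invented): a descriptive name plus a doc comment that tools such as pydoc or Sphinx can extract automatically, so the documentation lives next to the code it describes.

    from datetime import date, timedelta

    def business_days_between(start, end, holidays=frozenset()):
        """Count business days in [start, end), skipping weekends and holidays."""
        days = 0
        current = start
        while current < end:
            if current.weekday() < 5 and current not in holidays:
                days += 1
            current += timedelta(days=1)
        return days

    # The name and docstring carry the intent; a doc generator picks them up
    # without any separate, hand-maintained document to fall out of date.
    print(business_days_between(date(2007, 10, 8), date(2007, 10, 15)))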

So, let's put the blame for these large failures where it belongs: with the idiots who shouldn't be hired for this work in the first place.

Re: Avoiding Development Disasters

2007-10-16 13:00 • by dotnetgeek (unregistered)
Is there a larger image for "The code to ruin"? I want to print a poster and put it in my cube.

Re: Avoiding Development Disasters

2007-10-17 08:15 • by sweavo (unregistered)
157533 in reply to 156483
JohnFromTroy:
Excellent essay.

The rule-o-three thing is worthy of coder's scripture.


I concur. I have come close to screwing some of my deadlines by generalising too early, spending time on a wonderful infrastructure that is at present scarcely used.

Re: Avoiding Development Disasters

2008-03-18 14:23 • by fa (unregistered)
184448 in reply to 156546
Sin Tax:
Expensive failed projects - that's not a real WTF.

No, the real WTF is a project like the one I'm attached to. A public, underfunded customer who can't really afford the solution necessary or desired. Eternal fighting over price and deliveries, eternal dissssssssscussion about whether to upgrade the obsolete and out-of-support J2EE/Portal product it is built on, a production environment that was put together by half-clueless developers (who were particularly clueless about building a production environment for high availability and maintainability), and which is now so fragile that the appservers need to be kicked every night. Management trying to fix problems by throwing more hardware or more people at them.

The *REAL* WTF? This system is intended to be a core element of a national strategy. It is the poster case for how to do such a thing right. It has won countless awards, nationally and internationally. It actually manages to be marginally useful as well, yet everyone technical who has been clued in a bit about its internals will agree that it's rotten to the core.

Captcha: Paint. Yeah, people don't care about quality. They care about shiny color.

Sin Tax

Re: Avoiding Development Disasters

2009-06-25 06:53 • by Dave (unregistered)
271383 in reply to 156491
Richard Sargent:

As for knowing when to generalize, Haack lives by the rule of three: "The first time you notice something that might repeat, don't generalize it. The second time the situation occurs, develop in a similar fashion -- possibly even copy/paste -- but don't generalize yet. On the third time, look to generalize the approach."


P.J.Plauger published an article many years ago. I think it was in Software Development magazine, sometime in 1989. The article was about what he called the "0, 1, infinity" rule.

In essence, once the pattern goes multiple, it is time to design accordingly. Switch from the singular pattern into a multiple pattern and be ready for all future enhancements of the same nature. They will come. Build it. :-)


That's a very different version of the 0, 1, Infinity rule than I've seen before.

The version I'm familiar with says that in a system, for any particular thing, you should allow either no instances (that is, it is prohibited), exactly one instance (that is, an exception to the prohibition), or infinitely many instances (or, at least, as many as system resources allow).

The purpose of this is to avoid placing limits that are purely arbitrary. You wouldn't want, for instance, a mail client that placed an arbitrary limit on the length of folder names, or the number of folders, or the depth of folder nesting, because it would be frustrating under some circumstances. Thus, you should avoid writing software that does that kind of thing.
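
A tiny hypothetical sketch of the rule (the folder example below is mine, not a quote): the first version bakes in an arbitrary limit that someone will eventually hit, while the second is bounded only by available resources.

    # Arbitrary limit: why 8? Nobody will remember, and someone will hit it.
    MAX_FOLDER_DEPTH = 8

    def add_subfolder_limited(path, name):
        if len(path) >= MAX_FOLDER_DEPTH:
            raise ValueError("folder nesting too deep")  # frustrating and arbitrary
        return path + [name]

    # Zero-one-infinity: no artificial cap; nesting is limited only by memory.
    def add_subfolder(path, name):
        return path + [name]

    path = []
    for i in range(20):  # would blow up with the arbitrary limit above
        path = add_subfolder(path, f"level-{i}")
    print("/".join(path))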

Re: That isn't expensive, really

2010-02-06 13:47 • by ELIZA (unregistered)
298341 in reply to 156719
The Frinton Mafia:

Compared to the gigantic disasters that the UK's NHS is capable of, the US security services are actually fairly small and responsible organizations.

http://burningourmoney.blogspot.com/2006/03/latest-on-nhs-computer-disaster.html


That is in turn nothing compared to the disaster that IS the US health insurance system, at any rate from the standpoint of the good of the nation and that of her people. Instead of one component as Britain has, it has three major and a few minor components:
1) Medicare, which with the exception of Part D works very well (estimated efficiency of .96 to .98 with complete socialisation of coverage, private insurors refuse to compete with it on a financially and actuarially fair basis) (Part D is (for some reason, possibly Cargo Cult Economics of some sort) handled through private insurors and prevented from bargaining with the pharmaceutical companies over drug prices*).
2) Employer-based Insurance in which all workers are offered an identical insurance package (because of taxable income deductions for such), which is coming apart at the seams (intracorporate socialisation of coverage is complete but efficiency is low (.7 to .8), the lack of government health coverage is driving businesses with foreign competitors under, and insurors can refuse to cover lifesaving medicine (certain justices' interpretations of ERISA: even if you sue them and win, they only have to pay the cost of the denied medicine)).
3) Individual Insurance, which is a disaster on all fronts (near-complete actuarialisation, low efficiency (.7 to .8), soaring costs (under the latest "reform" bill, it could cost, for a middle-class family of four with an income of $50k, more than a sixth of the family's income to obtain the mandatory minimum coverage), and some insurors have been known to use various methods to avoid paying for medical procedures, up to and including retroactively stripping people of their coverage).
Minor parts include the Veterans Administration, which includes a mini-NHS for veterans (for its funding, it is almost certainly THE most efficient insuror in the US, due to its ability to provide its patients NHS-style long-term coverage and use NHS-style bargaining over pharmaceutical prices); Medicaid, for poor people (it is now very possible, and for an estimated forty-five to fifty million Americans very much the case, to be too rich to qualify for Medicaid but too poor to afford private insurance, which is why S-CHIP exists; being partly state-funded, it is likely collapsing because of the current economic crisis and its effects on the state treasuries); S-CHIP, similar to Medicaid but for children in the gap between Medicaid and being able to afford private insurance; Health Management Organisations, private analogues of the NHS but treatable as ordinary private insurors; and the proposed "Medicare buy-in", which would have allowed people to buy Government insurance (like Canadians; a Canadian middle-class family of four is reported to pay $108 in British Columbia for a basic plan (cf http://dneiwert.blogspot.com/2007/02/go-ahead-and-die.html; it varies by province) v the $8.5k estimated for basic private insurance in the US AFTER reform) and given Medicare a larger subscription base for oligopsony bargaining.
The resulting mess of organisations is estimated to spend TWO AND A HALF times as much as Britain on healthcare** and still somehow gets care little better than Britain's (the comparison with the world leader in healthcare, France, is worse, as France spends half as much as the US on healthcare), and an estimated twenty-two thousand die each year for lack of insurance (http://dneiwert.blogspot.com/2009/07/tommy-douglas-canadas-answer-to-abe.html).
* Pharmaceutical patents themselves are nowadays of questionable value; I would replace them with prizes for invention were I a government or, failing that, use eminent domain to make them public for a reasonable price. Process Patents, for a SPECIFIC process by which a drug is synthesised, I might perhaps allow to stand.
** The NHS is notoriously underfunded; cf the Amateur Transplants' NHS Song.

Re: Avoiding Development Disasters

2010-08-04 21:49 • by Alex (unregistered)
316571 in reply to 156519
[quote]I don't understand what you are trying to say. When an agilist says that the docs will become outdated, this means with respect to how the application actually works (the code). How can the code be outdated with respect to the code?[/quote]

This has already been answered, but I'd like to add my 2 cents.

Actually, it's pretty easy for code to get outdated. Of course it's not outdated with respect to the code itself, but it easily becomes outdated with respect to the reasons for the code being there. Seen that a lot.

E.g., code circumvents some obscure bug in library X, but is 10x slower than it should be. The obscure bug is fixed, but the workaround never gets removed or switched back to the original code. You need to document things like that! Filing a bug in your own bug tracker for it could be one way of doing that. I would even go as far as filing that bug and putting a comment in the code that points to it. Maybe even link that bug to the bug report you created upstream. Then again, I've seen programmers who can't even read a comment two lines above the line of code they are reading, and who add a comment asking about the reasons for doing something that were explained just two lines above ... go figure.
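
A hypothetical sketch of that kind of breadcrumb (the library, ticket IDs and function are all invented): the comment ties the workaround to a tracker entry, so whoever sees the upstream fix land knows exactly what to delete.

    import unicodedata

    def normalize_name(name):
        # WORKAROUND for upstream bug FOO-1234 (tracked internally as APP-567):
        # libfoo 2.3 mishandles combining characters, so we pre-normalize here
        # instead of calling libfoo directly. This path is ~10x slower.
        # TODO: remove once we require libfoo >= 2.4, where FOO-1234 is fixed.
        return unicodedata.normalize("NFC", name).strip().lower()

    print(normalize_name("  Éléonore  "))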
