Admin
a 46% memory footprint reduction is not going to get you that far either. If you have to scale by orders of magnitude, your best bet is to have code that can be made parallel -- and to add lots more machines. Yes, hardware can be an answer. If you style yourself an engineer, you use the best tool for the job. And that can be software, hardware, or both.
Of course, I don't know the code. But if Wilhelm's crack team of developers only focused on footprint, chances are they didn't invest in highly modular code. You want to refactor modular code, not obscure hand-optimized, inlined code. Premature optimization is the root of all evil.
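A rough, hypothetical sketch of what "code that can be made parallel" means in practice, assuming an embarrassingly parallel workload; the chunking and the processChunk work are made up for illustration, not taken from the story:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelBatch {

    // Stand-in for real per-chunk work; the key property is that chunks
    // are independent of one another.
    static int processChunk(List<String> chunk) {
        return chunk.size();
    }

    public static void main(String[] args) throws Exception {
        List<List<String>> chunks = List.of(
                List.of("a", "b"), List.of("c"), List.of("d", "e", "f"));

        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        try {
            List<Callable<Integer>> tasks = new ArrayList<>();
            for (List<String> chunk : chunks) {
                tasks.add(() -> processChunk(chunk));
            }
            int total = 0;
            for (Future<Integer> result : pool.invokeAll(tasks)) {
                total += result.get();
            }
            System.out.println("processed items: " + total);
        } finally {
            pool.shutdown();
        }
    }
}
```

Because nothing is shared between chunks, the same shape scales from more threads to more machines, which is the point the comment makes.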
Admin
Yay! Big step!
Now work on learning how to quote properly. Trim some of the excess text you're quoting before posting. It cuts down on the amount of cruft you have to wade through when reading. :-)
Admin
Hi there, I'm writing through a wormhole from Optimum BBS back in 1988, dial in on 629 969 0009, because we've got a thread going about some idiot who's just spent 5 months recoding the main application where I work on the VAX because it really should fit everything into 512kB of memory, shouldn't it?
He was unrolling loops, inlining functions, hand writing assembly code and everything, until this intern called up the Digital Equipment Corporation maintenance contractor and got him to upgrade the memory, and everything ran fine. But this bloke said it needed to be rewritten anyway, because any minicomputer application should fit in 512kB.
Anyway, this bloke started working on a PDP-8 with 6kB, and says he could do anything with that so why should he need any fancy compiled code.
To give him his due, he's got the memory use down lots, and it's all very nicely structured, but it has taken masses of time.
Anyway, it's all "plus ça change, plus c'est la même chose" (the more things change, the more they stay the same), really. I don't think there's anything else I can add.
Admin
Pontoons. All you need are pontoons.
Admin
But this is the fundamental cause of enterprise software degrading as it ages: everyone knows the best solution is to apply a bandage now to stop the profuse bleeding, and then stitch up the wound so it actually heals properly. The problem is that "not bleeding" is a sufficient solution that, unless proper documentation and planning are done, most companies forget about the wound. In the end, you get a big, ugly scar because you didn't bother to deal with the problem (a gaping wound), just the symptom (profuse bleeding). Sadly, long-term fixes get marginalized and abandoned far too often, simply because the band-aid is (falsely) considered "good enough."
Admin
There is no such thing as an instance method that should be static.
Admin
Try 1K words of 12 bit magnetic core on a PDP-8e with an ASR-33 teletype with paper tape punch and reader.
(Insert appropriate Monty Python reference here)
Admin
Ahh, but there are. In a number of poorly designed systems I've seen personally, there are methods within a class that are instance methods, although they perform no function that is specific to any instance. For instance (no pun intended), if you have a DateTime class that defines a "getCurrentDateTimeAsString()" method that retrieves the system's current date and time, you would be best off defining that as a static method, because the system's date and time should be the same across all instances and therefore shouldn't require instantiation to work properly. But I've seen numerous situations where, for some reason apparently known only to the developer, a method like that was defined as an instance method.
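A minimal sketch of the distinction, using a hypothetical DateTimeUtil class (the names are illustrative, not from the original application):

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateTimeUtil {
    // Static: no per-instance state is involved, so callers can use it
    // without constructing a DateTimeUtil object.
    public static String getCurrentDateTimeAsString() {
        return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date());
    }
}

class Caller {
    void log(String message) {
        // No instantiation needed, which is the commenter's point.
        System.out.println(DateTimeUtil.getCurrentDateTimeAsString() + " " + message);
    }
}
```

Calling it as DateTimeUtil.getCurrentDateTimeAsString() makes the "no instance state involved" contract explicit at the call site.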
Admin
When all you have is a hammer everything is a nail...
Admin
-Harrow.
Admin
3.5k!?! We'd have crawled through hot coals to get 3.5k. We had 1k on our ZX81 and we were damn proud of it
Admin
Wilhelm's mistake was leading off his presentation with the RAM footprint reduction. As so many commenters here have pointed out, spending resources to make a running program smaller or faster is rarely justified today, unless you're Google.
He should have set aside his personal bias, and begun with a discussion of how rushing the development had led to crappy architecture decisions, inefficient design, resource leaks, etc. Then, when he finally mentioned less RAM or fewer CPU cycles, any remarks like Bob's would seem like unprofessional carping.
So TRWTF was not letting a geek make business decisions (he didn't, really) but allowing a zealot to structure a business case presentation.
-Harrow.
Admin
You are not alone .... I remember how incredibly happy I was when I got my first 16 KB from MultiTech for my '81.
As for my $0.02 about the OP:
It sounds as if the web app in question was to remain in service for a long time to come -- let's say 4+ years. In that case the expense of re-factoring the implementation with five guys for five months (full-time?) is justified, because over that long a period you get a lot of ancillary benefits, like (hopefully) a streamlined implementation, more effective use of resources, enhanced developer productivity, and customer confidence in the product, etc.
Having said that, if I were in Wilhelm's shoes I would in any case have tried to get a server memory upgrade up to the max (2GB or 4GB) as a stopgap measure.
Admin
I think there are two WTFs in this article.
Regarding #1, are they only running one server? With what kind of hardware? Are they insane?! My company has a cluster, and there's a good reason to have one.
As for #2, I suppose he's never worked on a monolithic application, being an intern and all, and has no real idea of how bloody difficult it is to maintain one.
Admin
I agree with this. As I've said to others, my beef is that the article implies only the memory upgrade was necessary, and not fixing the memory leaks.
Admin
Why would anyone take anything an intern says seriously? They usually have no clue as to the underlying details of a given problem.
Just thinking out loud...
Admin
Ever heard of paper?
Admin
You are right, but the WTF is maybe that Wilhelm allowed the problem to persist for the entire time he was spending on the optimizations and totally ignored the quick solution of adding just a little extra memory.
In the end, you are right. Throwing more and more memory at the problem is not the right solution if there are -memory leaks-.
However when a server is equipped with as little as 512MB, just adding an extra GB or 2 really is the first thing you do. 2GB really is nothing these days. Heck, even 4GB is peanuts. This will buy you some time and costs you very little. At the very least, with more memory you can set a convenient memory threshold for an alarm to go off, after which you still have enough time so you can reboot your app outside peak business hours.
While that memory sits there you can analyze the cause of the problem. If there are leaks, fix them. If the code is infested with a sloppy resource usage pattern, improve that step by step.
However, if the application really needs the memory just add the memory it needs. The second WTF! is that Wilhelm was so obsessed with the rule of thumb "web applications shouldn't use more than 512MB" that he also ignored that some web applications in fact do need more. Trying to squeeze all the data that the application really needs in 512MB is good when for some reason you work on a platform that is limited to a maximum of 512MB. This was more or less how things worked in the C64 era. The maximum was 64KB and there were (almost) no options of expanding that.
However, if upgrading your memory is as simple as installing an extra DIMM for a few bucks, then you really should try that option first.
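For what it's worth, a minimal sketch of the kind of memory-usage alarm described above, using the standard java.lang.management API. The 512 MB figure is simply the number from the story, the threshold is armed per heap pool for simplicity, and the "page an operator" part is left as a comment:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryNotificationInfo;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import javax.management.NotificationEmitter;

public class MemoryAlarm {

    public static void install(long thresholdBytes) {
        // Arm a usage threshold on every heap pool that supports one.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            if (pool.getType() == MemoryType.HEAP && pool.isUsageThresholdSupported()) {
                long max = pool.getUsage().getMax();
                // Clamp to the pool's maximum when one is defined (getMax() is -1 otherwise).
                pool.setUsageThreshold(max > 0 ? Math.min(thresholdBytes, max) : thresholdBytes);
            }
        }
        // The platform MemoryMXBean emits a notification when a threshold is crossed.
        NotificationEmitter emitter =
                (NotificationEmitter) ManagementFactory.getMemoryMXBean();
        emitter.addNotificationListener((notification, handback) -> {
            if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED
                    .equals(notification.getType())) {
                // In real life: page an operator, schedule an off-peak restart, etc.
                System.err.println("Memory threshold exceeded: " + notification.getMessage());
            }
        }, null, null);
    }

    public static void main(String[] args) {
        install(512L * 1024 * 1024); // the 512 MB figure from the story
    }
}
```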
Admin
(Doesn't have time to read through 7 @#$%ing pages)
I have to agree with the re-engineering solution. Throwing more hardware at the problem would only be a temporary fix, if that. Eventually the leaking resources would likely still fill up the available memory or the application would grow and they'd face an exponentially bigger problem. So it was probably a good idea to re-engineer the application, but the fact that Wilhelm didn't know that it was a better idea than the hardware fix is still a WTF. ;D
It's hard to say without more information (and experience :P).
Admin
Well, there is a difference between fixing sloppy code that uses resources too generously and taking draconian measures to save a few bytes.
Stuff like encoding Booleans into an integer, dropping buffers and caches altogether, abundant use of static methods, etc. is not really an improvement to the architecture of a system. Yes, it saves some memory here and there, but that kind of code is WAY harder to maintain.
Like the article mentions, Wilhelm is from the C64 era. We could do wonders then with assembly, even doing crazy stuff like jumping between opcodes to squeeze every byte out of the system. -This- style of programming is not recommended anymore. Any trivial programming task will take at least 10 times longer (if not even longer than that) and maintenance in such a refactored system is harder, not easier.
I think many people here confuse the idea of refactoring sloppy code (which is always a good practice) with applying draconian 8-bit practices from the '80s to modern 32- or 64-bit computing platforms.
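To make the trade-off concrete, this is roughly what "encoding Booleans into an integer" looks like; the flag names are hypothetical, and the point is that every read and write now goes through a mask operation:

```java
public class PackedFlags {
    // Hypothetical flag positions, packed into one int instead of separate booleans.
    static final int ACTIVE    = 1 << 0;
    static final int DIRTY     = 1 << 1;
    static final int READ_ONLY = 1 << 2;

    private int flags; // 4 bytes holding up to 32 booleans

    void set(int flag, boolean value) {
        flags = value ? (flags | flag) : (flags & ~flag);
    }

    boolean isSet(int flag) {
        return (flags & flag) != 0;
    }

    public static void main(String[] args) {
        PackedFlags f = new PackedFlags();
        f.set(ACTIVE, true);
        f.set(DIRTY, true);
        System.out.println(f.isSet(ACTIVE) + " " + f.isSet(READ_ONLY)); // true false
    }
}
```

It does pack up to 32 flags into 4 bytes instead of separate boolean fields, but as the comment says, the calling code becomes noticeably harder to read and debug.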
Admin
64KB? 16KB? 3.5KB?
Wimps. If you couldn't make it fit in the 256 bytes that came with the Altair, you weren't trying very hard.
IMSAI's "Kill the Lights" game fit in waaaay less than 256 bytes. My kids love that game.
Eventually of course, everything will get optimized down to one byte containing the number 42. :-)
Admin
Indeed, that's another major problem with applying draconian 8-bit-style programming paradigms to a modern code base. The article doesn't mention it explicitly, but taking (extreme) measures to save a few bytes will simply cost you in time. If you save those bytes by encoding Booleans in an integer, it will cost you CPU cycles when your code needs to mask out the correct bit. If you disable caching you will save even more memory, but it will cost you in time when your code needs to make a round trip to the database, etc.
Again, a lot of the people commenting assume that the original code was overly sloppy and leaking resources.
Now just suppose that this wasn't the case, or at least not that extreme (no software is perfect after all). Suppose the app was relatively well written and simply needed a certain amount of data to be in memory.
Since Wilhelm obviously didn't invent the perfect new lossless compression algorithm, the server will eventually need more memory anyway if the business grows. Wilhelm's draconian coding efforts just postponed this situation.
Maybe the REAL WTF is that the article wasn't clearer about what the exact underlying problem was ;)
Admin
For that matter, why are you implementing your own DateTime?
Admin
Just about as gooooood a point as anything else so far.
I loved watching the Moon landing when I was a kid. I even loved playing with my dad's Fortran program to simulate it on a George III machine. It was great.
But it wasn't exactly a server system, was it?
Hell, I've written a bisync (technically ALC) driver in 8K of code that would support either the master or the slave side for around a hundred terminals. (The trick was that I used the code segment for the master side as the data segment for the slave side, and vice versa.)
But so what?
The only thing I've learned so far from this thread, and I haven't even learned it because I already knew it, is that you should never let a Web developer near a high-performance server system. In 95% of cases, they will just screw it up. Even after ten years in the wild, this is still a random collection of half-baked and non-orthogonal technologies, compounded by understandable but not forgivable ignorance of the underlying problems of scalability, latency, and resource consumption.
We're stuck with Web crap. The bubble burst a long time ago, but the so-called technology is still there.
It was the wrong way to go ten years ago, but unfortunately we now have ten years of investment in this drivel, and we're stuck with it. And with the hordes of idiots who claim they can produce a working system by using it.
I'd call that the biggest lost opportunity in history since I met Marilyn Monroe and tried to chat her up by telling her that "Norma Jean" is a really sexy name. And I wasn't even born at the time.
Admin
The man has Burger Flipper tattooed on his forehead.
Leave him to his meat patties. Don't taunt him. That's not nice, and it might result in a nasty burger flame war...
Admin
3.5K!?! I started with 2K on a Sinclair!
Admin
3.5k!?! I started with 1k on a zx81.
Admin
Perhaps it's to compensate for our huge brains?
Admin
It was probably an old server. Businesses usually don't like to buy new servers unless they absolutely have to.
Admin
That's the core of this WTF. Although from reading the article we don't really learn whether it was a sensible rewrite or a maniacal and draconian exercise to fit more stuff into a tight space, at any rate adding more RAM to the system would probably have mitigated the problems during those 5 months.
As I mentioned before, it would have at least given the people operating the server more time. If the system was operating with 512MB then, even if it was leaking memory, it would most likely not leak any faster when equipped with, say, 4GB of memory. So, 4GB of memory could just be used as a safety buffer. If you set an alarm to go off when memory usage reaches 512MB, you still have about 7-8 times your average uptime in which to reboot the server. In practice this means you could probably reboot the server at your leisure, say over the weekend in the middle of the night.
Wilhelm might have been correct to address any possible underlying issues, but he failed when he considered his solution the ONLY solution. Why not add RAM AND fix the underlying problem?
Admin
Indeed. "That would've been an option too" were Wilhelm's own words, not those of a PHB. Wilhelm also specifically remarked that his reasoning was that "no web application should ever require more than 500 megs".
Now does that sound as if memory leaks were at the core of the problem? Not really. If it was, Wilhelm would have been quick to say that adding more memory was only a senseless temporary solution. But he didn't say that. He couldn't say that if it wasn't the issue. Not when being in a meeting with other developers (he was about to discuss the 'architectural changes', you wouldn't explain these to a PHB).
Also read this:
That doesn't sound like solving memory leaks, does it? This sounds like minimizing memory buffers up to the point where it influences the stability of the application...
Admin
Even from a business perspective it can often be the right decision to spend the time optimizing. For example, if you have a lot of customers and your product is not that expensive, it can be too much to ask of your customers to upgrade their hardware. Things like memory usage can make the difference between making the sale and not. Just because so many people these days work on single-instance environments, where it is virtually always the best option to upgrade the hardware, does not mean that everybody does. It certainly does not make a five-month optimization effort that likely turned out a slim, pleasant-to-use product a WTF.
Admin
Feh. 128 BYTES on a PDP-3.
Admin
Quoted for sanity.
I don't know how anybody could possibly justify that the "fix the software" route was right, given the facts of the story as presented.
Firstly, stuffing 2GB of RAM in the server may or may not have solved the problem (we don't know there was an actual memory leak). If it didn't solve it, what have we lost? Sixty dollars, according to the original story. That's certainly less than the cost of the first meeting the five developers had to discuss their strategy. Get some perspective: $60 is small change.
Let's say the $60 works in the short term, but as time goes by, things slow down and after another six months they have to add another $60 worth of RAM. No, let's say they have to throw $10K of servers at the problem every six months. That's $20K a year, so the solution has to be running for ten years before the cost of the "throw hardware at it" approach exceeds the cost of the "fix the problem" approach, on the assumption that the fix-it approach costs $200K.
As described in the story, the approach taken was complete insanity.
Of course, all of the above is completely false if the problem really was a memory leak. But then you have to fire all of the developers for taking 30 man months just to find and fix a memory leak.
Admin
(1) Why would that have been Wilhelm's job in the first place? He is in development, not in data center operations. Operations should have taken the memory approach all by themselves.
(2) Previous posters already named long purchase processes and/or related budget issues as reasons for the missing memory upgrade. This can be an issue -- especially in a government organization.
(3) It is very obvious to me that Wilhelm had management support for his 5 guys / 5 months re-factoring project. You don't get 25 man-months all by yourself without management buy-in for internal re-factoring. The reasons for the management support remain open to speculation, but it must have been there.
(4) In all likelihood Wilhelm's team did more than just re-factoring. No project team working on re-factoring is ever left in peace by scope creep. Even if this was a government organization, there must have been a CYOA reason in the project requirements to allocate the resources. It's 2008, people -- nobody is allocating 25 man-months for internal re-factoring of a widget web app. I can practically see the "leveraging implementation synergy" catch phrases in the project proposal ... ;-)
Admin
[quote]...if you have a DateTime class that defines a "getCurrentDateTimeAsString()" method that retrieves the system's current date and time, you would be best off defining that as a static method...[/quote]
Well, you can define "getCurrentDateTimeAsString()" as an instance method to simplify unit testing. Otherwise you would have a hard time mocking the method, as far as I know.
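One common way to keep such a method testable without tying it to instances of the DateTime class itself is to route it through an injectable clock. A minimal sketch with a hypothetical TimeSource interface (no particular mocking framework assumed):

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// Hypothetical seam for testing: production code uses the system clock,
// tests inject a fixed one.
interface TimeSource {
    Date now();
}

class SystemTimeSource implements TimeSource {
    public Date now() {
        return new Date();
    }
}

class DateTimeFormatterUtil {
    private final TimeSource timeSource;

    DateTimeFormatterUtil(TimeSource timeSource) {
        this.timeSource = timeSource;
    }

    String getCurrentDateTimeAsString() {
        return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(timeSource.now());
    }
}

class Demo {
    public static void main(String[] args) {
        // Test-style usage: a fixed time makes the output deterministic.
        TimeSource fixed = () -> new Date(0L); // 1970-01-01T00:00:00Z
        System.out.println(new DateTimeFormatterUtil(fixed).getCurrentDateTimeAsString());
    }
}
```

Production code passes a SystemTimeSource; tests pass a fixed one, so the formatted output can be asserted exactly.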
Admin
Oops... I am sorry for deleting too much BBCode and missing the preview button :(
Admin
Not all companies have a separate data center operations group. Most companies I know that basically operate a web application have their programmers also oversee the configuration of the server on which their web application runs. At the very least they advise a system administrator about the hardware that's needed.
Wilhelm's first task would have been to advise who ever is in charge of the actual hardware to install that extra dirt cheap memory module.
Wilhelm either neglected to do that because of his stubborn belief that no web application should need more than 500 MB, OR... Wilhelm tried to solve the problem within the realm where he has power instead of communicating with "those other guys".
That latter option would be an altogether different WTF. You see this more often: front-end programmers who try to solve a problem in JavaScript/HTML when it should obviously be done on the server side in Java, Java programmers who try to fix a problem in Java code when it obviously should be done in SQL, and, to come full circle, SQL coders who try to fix a problem in SQL when it very obviously should be done in the front-end presentation tier (yes, this sounds weird, but it does happen).
My guess however is with the first option. Wilhelm falsely assumed that 512MB was a fixed limit and just completely ignored the idea that servers can be upgraded.
Believe me, it was a programmer that sent in the original story. No PHB is going to spend his time writing something like this up and sending it to thedailywtf. Since the article appeared here, I believe the WTF is what it seems: Wilhelm took 5 months to fix something in software which could also have been addressed with a simple memory upgrade. If it was indeed a memory leak then Wilhelm was partially right (although he still should have installed that extra 2GB first and then started the fix). Either way, there is a WTF in what Wilhelm did.
Admin
losers! 128 bytes on Atari 2600
Admin
I would guess this was a C# application written by people who had never heard of the 'using' statement, or a Java app where a great deal of resources that should have been closed and cleaned up were actually never closed, but just 'leaked' to the garbage collector.
Makes one wonder how much of the problem would have been solved just by explicitly invoking the garbage collector at a few select locations.
TRWTF are the comments, as usual.
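For reference, a minimal sketch of the Java-side equivalent of the C# 'using' pattern the comment mentions: closing resources deterministically instead of leaving them for the garbage collector. In current Java this is try-with-resources; in 2008 it would have been an explicit try/finally with close() calls. The query and JDBC URL are made up:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class LeakFreeQuery {
    // try-with-resources closes the result set, statement, and connection
    // even when an exception is thrown, so nothing "leaks" to the GC.
    static int countWidgets(String jdbcUrl) throws SQLException {
        try (Connection conn = DriverManager.getConnection(jdbcUrl);
             PreparedStatement stmt = conn.prepareStatement("SELECT COUNT(*) FROM widgets");
             ResultSet rs = stmt.executeQuery()) {
            return rs.next() ? rs.getInt(1) : 0;
        }
    }
}
```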
Admin
You would only have a hard time mocking the function if you didn't know what you were doing.
As a Perl programmer, I find mocking functions particularly easy. However, even in other languages I've worked in, there's generally a way. For example, for many compiled languages, there's the LD_PRELOAD environment variable.
Not to mention: why are you mocking it? Why not simply have your unit test factor in the concept that time changes? After all, without that, it's not exactly a valid test: you know your software did work once upon a time, but you don't know if it still works. (Yes, I know that this refutation is specific to time - that's why I answered the question before mocking it.)
Admin
Oh, so you're a Java programmer, then?
Admin
Admin
I just finished a CS program, and my classes covered software optimization two or three times. The first optimization step is always "see if buying faster/bigger hardware will make the problem go away".
Even once you actually get into changing the program for optimization, you look for asymptotically better algorithms, not for places to save a couple meg of RAM.
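A small illustration of what "asymptotically better" means in practice, with a made-up task: replacing a nested-loop duplicate check (O(n^2)) with a single pass over a HashSet (O(n)):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DuplicateCheck {
    // O(n^2): compares every pair of elements.
    static boolean hasDuplicateSlow(List<String> items) {
        for (int i = 0; i < items.size(); i++) {
            for (int j = i + 1; j < items.size(); j++) {
                if (items.get(i).equals(items.get(j))) {
                    return true;
                }
            }
        }
        return false;
    }

    // O(n): a single pass, trading a little memory for a lot of time on large inputs.
    static boolean hasDuplicateFast(List<String> items) {
        Set<String> seen = new HashSet<>();
        for (String item : items) {
            if (!seen.add(item)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> sample = List.of("a", "b", "c", "a");
        System.out.println(hasDuplicateSlow(sample) + " " + hasDuplicateFast(sample)); // true true
    }
}
```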
Admin
Try 256 BYTES of RAM (2KB of ROM) on for size sometime. (HC05K)
Admin
+1^100000000000000000000000000000000000000000
Admin
Actually VICs had 4K of RAM. If you chose to use the upper 512 bytes for the display, then that was really up to you ;-)