• Anon (unregistered)

    There comes a point where time is more important than useless metrics.

    Wilhelm has long passed that point.

  • Huh (unregistered)

    I guess I'm the WTF, because I agree with Wilhelm.

  • m k (unregistered)

    Surely Wilhelm ultimately did the right thing? He re-engineered a bad solution into a much more efficient and, by the sounds of it, far more sensible and easier-to-maintain system. Yes, adding more memory is the short-term solution, but what happens in a year or two if demand increases or more features need to be added to the app and its memory requirement jumps up again? It wouldn't scale as well in its original form.

    I find this article a bit hypocritical in implying that redesigning a bad, over-engineered, much-patched system into a better, more efficient one is a WTF. Considering most of the articles on this site talk about the short-sightedness of managers who prefer short-term fixes (like just adding more memory) despite the fact that they will probably cost more down the line, insinuating that fixing a bad application was the wrong decision here seems two-faced!

  • BK (unregistered)

    There is a thing called garbage-collection time. The more garbage there is, the longer GC takes (freezing the whole application). So Wilhelm actually made an improvement.
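
    A minimal Java sketch of the point (hypothetical demo code, not from the article): churn through short-lived objects, then read the collectors' cumulative time through the standard GarbageCollectorMXBean API. The more garbage an app produces, the more of this time piles up, and on a stop-the-world collector that time is exactly the freeze users see.

        import java.lang.management.GarbageCollectorMXBean;
        import java.lang.management.ManagementFactory;

        public class GcTimeDemo {
            public static void main(String[] args) {
                // Allocate and immediately discard roughly 1 GB of short-lived objects.
                for (int i = 0; i < 10000; i++) {
                    byte[] garbage = new byte[100000];
                }
                // Sum the cumulative collection time across all collectors.
                long gcMillis = 0;
                for (GarbageCollectorMXBean gc :
                        ManagementFactory.getGarbageCollectorMXBeans()) {
                    gcMillis += gc.getCollectionTime();
                }
                System.out.println("Time spent in GC so far: " + gcMillis + " ms");
            }
        }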

  • Mikey (unregistered)

    You can't fix a memory leak in a virtual memory system by adding more RAM; especially if the system is already paging, you just push the out-of-memory condition a bit later. In Java, anyway, the maximum usable amount of memory is normally a VM option.
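
    For instance, with the HotSpot JVM (hypothetical jar name), the heap ceiling is fixed at launch time, so adding physical RAM changes nothing until someone also raises -Xmx:

        # allow the heap to grow to 2 GB instead of the default
        java -Xms512m -Xmx2048m -jar webapp.jar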

  • An Old Hacker (unregistered) in reply to Anon

    Problem (1): zero code discipline --> (customer-invisible) result (1): lousy code --> (customer-invisible) result (2): memory leaks --> (customer-visible) result (3): reboots.

    Intern solution: increase server memory.

    I can see this as an interim solution. Wilhelm was almost certainly the wrong guy for the job, but there was originally only one problem.

  • SoonerMatt (unregistered) in reply to BK

    OK, so far it looks like we have a consensus. The real WTF is covering up undisciplined, problematic code with more hardware. Wilhelm's discipline and efficient use of resources should be commended.

  • DJ (unregistered)

    That is the problem with these old-school programmers. Most of them are not that well versed in what is going on beyond what they already know. They don't really want to admit that someone else can come up with a better solution than they can. They are good to have on your team, but they are a pain in the A$$ at the same time.

  • jwin (unregistered) in reply to An Old Hacker

    I believe the biggest issue here is that it took Wilhelm's full team 5 months of work to re-engineer the code. This could've been a back-burnered project while the short-term solution was implemented, with a smaller team and less stringent performance-increase requirements.

  • Kyle (unregistered)

    Emotionless? Wilhelm is always screaming.

  • Brian (unregistered)

    That's right, just throw more hardware at a software problem! (Thick sarcasm intended.)

  • Anonymously Yours (unregistered) in reply to m k
    m k:
    Surely Wilhelm ultimately did the right thing? He re-engineered a bad solution into a much more efficient and, by the sounds of it, far more sensible and easier-to-maintain system. Yes, adding more memory is the short-term solution, but what happens in a year or two if demand increases or more features need to be added to the app and its memory requirement jumps up again? It wouldn't scale as well in its original form.

    I find this article a bit hypocritical in implying that redesigning a bad, over-engineered, much-patched system into a better, more efficient one is a WTF. Considering most of the articles on this site talk about the short-sightedness of managers who prefer short-term fixes (like just adding more memory) despite the fact that they will probably cost more down the line, insinuating that fixing a bad application was the wrong decision here seems two-faced!

    Agreed. When all you do to fix a problem like that is add more memory, you will never be given the time to fix the code. Since it's a web app, you can expect two things to increase over time: the load and the feature list. You can't build a castle in a swamp regardless of the time-to-market speed. At some point someone is going to have to twist management's arm for time to fix the code.

    Maybe Wilhelm was more draconian than the job called for. The job still needed to get done. Throwing money at a problem instead of resolving it is the WTF.

  • FredSaw (cs)
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.
  • (cs) in reply to FredSaw
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.

    Butterflies.

  • d000hg (unregistered)

    So 5 people worked for 5 months. That's basically 2 man-years of salary, which works out to what, $200K?

    Is that better or worse than buying more hardware? Guess it depends on the future of the project.

  • Hypothetical (unregistered) in reply to FredSaw
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.
    You have only 1 KB of memory. Now what?
  • BadReferenceGuy (unregistered) in reply to Anonymously Yours
    Anonymously Yours:
    You can't build a castle in a swamp regardless of the time-to-market speed.
    Sure you can. And version 3.1 will be the most solid castle on the Internet (After versions 1 and 2 sink into the swamp, and version 3 burns down, falls over, and then sinks into the swamp.).
  • Jason0x21 (unregistered)

    Bumping up memory on a leaky application only delays the inevitable. From the description, the code was running out of memory faster and faster, which seems to indicate a leaky application (getting worse with each patch) rather than a simply inefficient one.

    If rearchitecting was the ultimate solution, though, the 2GB solution might have been a good stopgap measure while the application was redone (instead of redoing the application twice).

    Wilhelm's only fault seems to have been not knowing how cheap memory was. WTF, indeed.
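
    For reference, the classic shape of such a leak in Java looks something like this (a minimal sketch with hypothetical names): a static map used as a per-request "cache" with no eviction, so every request pins more heap for the life of the process.

        import java.util.HashMap;
        import java.util.Map;

        public class SessionCache {
            // Static, so entries survive for the lifetime of the JVM.
            private static final Map<String, byte[]> CACHE = new HashMap<String, byte[]>();

            public static void handleRequest(String requestId) {
                // Added on every request, never removed: a true leak, not mere
                // inefficiency. A bigger heap only postpones the OutOfMemoryError.
                CACHE.put(requestId, new byte[64 * 1024]);
            }
        }

    A bounded cache (e.g., a LinkedHashMap that overrides removeEldestEntry) fixes the leak; more RAM only delays it.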

  • (cs) in reply to FredSaw
    FredSaw:
    Bah! Real programmers started with 16K.
    No, real programmers started with 4K words, but that's another story.

    Anyway, the memory-expansion suggestion might have been a cheap way of circumventing the problem if the extra memory demand was the result of growing data sets (that kind of thing can happen to a running system). Still, it would mean that the system would run out of memory sooner or later (4GB is still a major hurdle). So Wilhelm's solution would still be better.

    And throwing hardware at a bad design to cover it up is... just horrible.

  • Salami (unregistered)

    5 months' work for 5 guys vs. $60. You guys that agree with Wilhelm must all be on the receiving end of paychecks and not the ones writing them.

  • fruey (unregistered)

    Where I work, there's a "cycler": an automated batch job that reboots servers in a queued order to avoid OOMs.

    More hardware (all servers at 4GB of memory) already didn't work, and now the cycler is just a horrible hack to cover up exceptionally poor coding standards.

    The man-hours lost (debugging and hacking shoddy code) have to be added to the equation, as more memory never works for very long (especially with each passing release adding more badly coded features).

  • (cs) in reply to Salami
    Salami:
    5 months' work for 5 guys vs. $60. You guys that agree with Wilhelm must all be on the receiving end of paychecks and not the ones writing them.

    Yes. Management will always do the wrong thing, given the opportunity. Wilhelm was absolutely correct. Bandages don't heal wounds; they only cover them. When the application is out of control, it needs to be re-engineered into something that can be scaled and maintained going forward.

  • (cs) in reply to Salami
    Salami:
    5 months' work for 5 guys vs. $60. You guys that agree with Wilhelm must all be on the receiving end of paychecks and not the ones writing them.

    True, but as was mentioned above, step 1 is to increase the RAM; step 2 is to rewrite the program. What happens in a year or two when the thing blows up because the memory leaks weren't fixed and the bad architecture continues? At some point a rewrite/redesign/reimplementation needs to be done. Do you do it now, while it's manageable, or two years down the road, when it takes 5 men 12 months to rewrite?

  • Pony Gumbo (unregistered) in reply to Kyle
    Kyle:
    Emotionless? Wilhelm is always screaming.

    Well done, sir!

  • SoonerMatt (unregistered) in reply to Salami
    Salami:
    5 months' work for 5 guys vs. $60. You guys that agree with Wilhelm must all be on the receiving end of paychecks and not the ones writing them.

    Actually, I am on the receiving end of paychecks, and using the RAM as the ONLY solution would have resulted in more pay.

    Sure, adding RAM and not rushing the refactoring would have been the best decision.

    Postponing a software solution would have resulted in massive emergency/sloppy code fixes that cost more in the long run.

  • el jaybird (unregistered) in reply to BadReferenceGuy
    BadReferenceGuy:
    Sure you can. And version 3.1 will be the most solid castle on the Internet (After versions 1 and 2 sink into the swamp, and version 3 burns down, falls over, and then sinks into the swamp.).

    I'm holding out for version 95, or at least NT.

  • oldami (unregistered)

    Who needs to go back decades? Ever programmed on a smart card? Yeah, the new ones have over 64K, but that's flash (slow). If your algorithm exceeds the actual RAM (typically 128 bytes), then it starts using flash. We have run into this many, many times.

  • Robble (unregistered)

    Apples vs oranges, as has been repeatedly pointed out in the comments. This sounds like one of those few times when the short-term expensive (but long-term cheap) thing was done. Developer time may be more expensive than RAM, but it's far cheaper than all the support time that's booked afterwards...

  • snoofle (cs) in reply to Salami
    Salami:
    5 months' work for 5 guys vs. $60. You guys that agree with Wilhelm must all be on the receiving end of paychecks and not the ones writing them.
    Sorry dude, but adding memory only delays the inevitable. What happens when you can't add more RAM (e.g., the box is maxed out), or the GC is just grinding and the app freezes?

    While the opportunity to do a good rewrite is all too rare, it invariably winds up much cheaper in the long run when you figure in opportunity cost (e.g., the system is down and you lose business), reduced maintenance costs, and shorter upgrade time to market.

  • SomeCoder (unregistered)

    Yes, I completely agree with what has been said here. Wilhelm was completely correct in doing what he did. Yes, he was maybe a bit hard-nosed about things, but I tend to agree with him on one point: programmers are lazy asses these days.

    Why is it that hardware gets faster but our programs actually run slower? Because programmers are lazy: "Hey, we've got 4 GB of RAM now, so I don't really need to be careful with memory."

    This is, to me, completely unacceptable. It is just as unacceptable as saying "Well, the application runs out of memory constantly so the fix is to buy more RAM." That is asinine and doesn't solve the real problem.

    If the app was already designed well and was running out of memory because your server only had 512 MB, that's one thing. Since the article specifically states that that is not the case, the real WTF is that Wilhelm had to be embarrassed for implementing changes that reduced the memory footprint by 50% (a nice feat in and of itself).

    He should have just screamed and jumped out of the window. You never miss a Wilhelm scream :)

  • Neil (unregistered)

    It seems pretty obvious that the company was the kind of company that prefers bandages to cures, given their policy of releasing software and patching it later (I've worked at a place just like it). It's no surprise that the manager was red-faced that his problem would go away for 12 months if they spent $60 on more RAM. It's most surprising that another developer would "rat out" Wilhelm!

  • Wilhelm (unregistered) in reply to FredSaw

    Commodore 64s only had 38K of RAM. The rest was write- or read-only memory.

  • NE (unregistered)

    Honestly, this shows a lot about how we've evolved with computers. Instead of making things more efficient nowadays, we can really just raise the requirements and run with that. Many of the critics of Windows love to harp on that issue when it comes to memory usage. While cleaning out the crap in this application has (hopefully) made it a lot better, blindly shoehorning the application into this memory footprint wasted quite a bit of developer resources.

    Also, as I recall from school, there can be a trade-off between reducing memory usage and reducing hard drive usage. For an example of that type of trade-off, there is kkrieger: http://www.theprodukkt.com/kkrieger It's a complete 3D shooter in 96 kilobytes of hard disk space. It also uses quite a bit of memory and system resources in order to achieve this incredibly small footprint.

  • Nerdguy (unregistered)

    Funny story. Years ago I worked at a company that managed the website for a large publishing concern. Let's call that publishing concern "Petrel Publishing". They used an unwieldy CMS written in tcl/tk (you can guess which one), running on Solaris boxes. We had recently moved the site to a new set of servers and location, and things were going awfully. They'd crash outright every time the traffic went even slightly up. Hard deadlock. Doom. Even with 2GB per server and quad UltraSPARC CPUs. So we'd just log in and restart the server and blame the damned CMS, until eventually the bosses "Somewhere in EMEA" got tired of it and demanded we fix it posthaste.

    So all the young Unix nebbishes are sitting around trying to think of reasons the software sucks, and whether we could get away with cron'ing something up, when our CIO, who'd started programming back when Data General still made hardware, comes in, listens to us for ten minutes, squints a bit, and says "What about ulimit?".

    Now, we were all Linux admins. Sure, we'd used Solaris and adminned boxes that had already been set up, but we were a bit unfamiliar with Solaris' ulimit settings.

    By default, rlim_fd_max on Solaris is 64. So any process can only open 64 concurrent files. Now, the CMS needed to generate and serve up thousands upon thousands of files, and would just choke when asked to do so.

    Needless to say, we upped it to 8k, and the site never crashed again.
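
    For anyone curious, the change amounts to a couple of kernel tunables (a sketch of the classic /etc/system settings; the 8k value comes from the story, and a reboot is needed for /etc/system changes to take effect):

        * /etc/system -- raise the per-process file descriptor limits
        set rlim_fd_max = 8192
        set rlim_fd_cur = 8192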

    So here's to you, old-timers.

  • Wilhelm (unregistered) in reply to Hypothetical
    Hypothetical:
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.
    You have only 1 KB of memory. Now what?

    Buy some more memory.

    Did you not read the article?

  • Anon (unregistered)

    Nowhere in the article was it mentioned that the memory is actually leaking.

    It's a large application. That's it.

  • tuna (unregistered)

    TRWTF is that an intern had the balls to actually say that in a room full of his superiors. Awesome.

    Also... I'm surprised that Wilhelm couldn't come up with the obvious response to the intern's question, which many in the comments above have pointed out: more RAM is a stopgap, and the problem would come right back if something permanent wasn't done about it. After that many years in programming, that response should have been second nature.

  • MindChild (unregistered) in reply to Salami

    You must be new to the industry. When a manager sees that a problem of this magnitude can be "fixed" with $60 and 10 minutes of work, all of a sudden that becomes the standard to him. Months down the line, when the app begins to blow up again and this time you can't throw more hardware at it, the manager will say something along the lines of "Why would I give you the time and resources to RECODE this application when the fix last time cost me $60 and 10 minutes? Your solution just isn't feasible." So another hack is dreamed up. Then another. Then another. A couple of years down the road, you have an app that is SO BAD that it needs to be COMPLETELY rewritten. But then... you have to fight (and lose) against the manager whose expectations are, and will forever be, that a fix is a few dollars and a few minutes.

  • ounos (cs)

    TRWTF is that he thought "staticalizing instance methods" actually reduces memory footprint.
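
    In the JVM, method bytecode is stored once per class in the method area, not in every object, so per-object size depends only on the fields. A minimal sketch (hypothetical classes) of why "staticalizing" buys nothing:

        // Instances of both classes have the same heap footprint: an object
        // header plus one int field. The methods live once, in the class.
        class WithInstanceMethods {
            int x;
            int doubled() { return x * 2; }
            int squared() { return x * x; }
        }

        class WithStaticMethods {
            int x;
            static int doubled(int x) { return x * 2; }
            static int squared(int x) { return x * x; }
        }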

  • Salami (unregistered) in reply to snoofle
    snoofle:
    Salami:
    5 months' work for 5 guys vs. $60. You guys that agree with Wilhelm must all be on the receiving end of paychecks and not the ones writing them.
    Sorry dude, but adding memory only delays the inevitable. What happens when you can't add more RAM (e.g., the box is maxed out), or the GC is just grinding and the app freezes?

    While the opportunity to do a good rewrite is all too rare, it invariably winds up much cheaper in the long run when you figure in opportunity cost (e.g., the system is down and you lose business), reduced maintenance costs, and shorter upgrade time to market.

    I'd agree with you if the server already had a normal amount of memory (which would be at least 2 GB). The first thing to do is to bump up the memory and then see what that gains you. You don't spend 2 man-years on a problem without exhausting all the easier options first.

  • krupa (unregistered) in reply to Wilhelm
    Wilhelm:
    Hypothetical:
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.
    You have only 1 KB of memory. Now what?

    Buy some more memory.

    Did you not read the article?

    Someone stole your wallet. Now what do you do?

  • SoonerMatt (unregistered) in reply to ounos
    ounos:
    TRWTF is that he thought "staticalizing instance methods" actually reduces memory footprint.

    Are you suggesting finalizing instead? (honest question)

  • Wilhelm (unregistered) in reply to ounos
    ounos:
    TRWTF is that he thought "staticalizing instance methods" actually reduces memory footprint.

    My interpretation was that the preamble, about the various bad features of the codebase, had nothing to do with the memory footprint; it was just irrelevant (but valid) criticism of the codebase.

    To assume that the dreaded Wilhelm was the guy who thought instance methods cause out-of-memory exceptions is a bit of a stretch.

    Actually, it kind of made the whole article seem a bit rubbish.

  • Wilhelm (unregistered) in reply to krupa
    krupa:
    Wilhelm:
    Hypothetical:
    You have only 1 KB of memory. Now what?

    Buy some more memory.

    Did you not read the article?

    Someone stole your wallet. Now what do you do?

    Kill the interviewer and steal his wallet.

  • Charles (unregistered)

    The intern wrote this article.

  • poke X, peek (X) (unregistered) in reply to Wilhelm

    No, there was a full 64K of RAM. For instance, you could copy the BASIC ROM into the RAM underneath it, page it out, and have a RAM copy of BASIC that you could muck about with. Which usually meant "turning the error messages into swear words".
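
    The classic two-liner (quoted from memory, so treat it as a sketch): reads from $A000-$BFFF return the ROM while writes always land in the RAM underneath, and clearing the LORAM bit afterwards banks the ROM out.

        10 REM COPY BASIC ROM ($A000-$BFFF) INTO THE RAM BENEATH IT
        20 FOR I=40960 TO 49151: POKE I,PEEK(I): NEXT
        30 POKE 1,PEEK(1) AND 254 : REM CLEAR LORAM - USE THE RAM COPY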

  • (cs) in reply to Hypothetical
    Hypothetical:
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.
    You have only 1 KB of memory. Now what?

    I build an Apollo Command Module and a Lunar Module and fly to the moon. That's a whole lot more memory than those systems had...

    /kids today

  • heise (unregistered)

    So, if we add enough memory to the server, the memory leaks will fix themselves?

    Nice reasoning.

  • IBMer with a brain (unregistered) in reply to m k

    I'm also completely with Wilhelm. As soon as you want to extend that application, or (God forbid) consolidate multiple applications onto fewer servers (and save some REAL money), you'll be thanking Wilhelm.

    Okay, so this is the real world, and all of Wilhelm's contributions will have been forgotten thanks to a snarky intern named Bob.

    Still, I'm amazed that this is posted in such a style that we're supposed to think throwing resources at the problem and hoping it will go away is the correct course of action, and proper engineering is the WTF. Did the pointy-hairs get ahold of the site logins?
