  • pstnotpd (unregistered) in reply to jtl

    Well, that will greatly reduce the curriculum of any CS or software engineering course.

    Q : What can you do to solve any error? A : Throw in more hardware.

    PASS

  • Erick (unregistered) in reply to Fedaykin

    @Fedaykin,

    Seconded. Wilhelm can be a programmer at our site whenever he wishes. With the advent of GUI-based development tools, people have to deal with too much crapola like this on production systems. Functionally it usually works, but without code optimization by hand, say bye-bye to your performance. I've seen a case where hardware to the tune of $100K was rolled into a datacenter. However, when one of our DBAs took a quick look at the app configuration, he quickly spotted that a lot was amiss. He then adjusted the number of cursors and tuned quite a few DB indexes. Result: the app now runs on an IBM X3650 while serving several million queries per day. CPU and IO load on average: less than 10%. The new $100K hardware? It sat idle for a month or two until a data warehouse app came along that really needed it.

  • Jimmy Jones (unregistered)

    Would extra memory have fixed the "unreleased resources" and "unclosed connections"?
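
    For illustration (hypothetical class and query names, assuming a JDBC connection pool): no amount of RAM releases a connection that is never closed back to the pool; deterministic cleanup does, e.g. try-with-resources in Java 7+. A minimal sketch:

    ```java
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public class OrderDao {
        private final DataSource pool; // hypothetical connection pool

        public OrderDao(DataSource pool) {
            this.pool = pool;
        }

        public int countOrders() throws SQLException {
            // try-with-resources closes the ResultSet, statement, and connection
            // even on exceptions, so the pool slot is always released.
            try (Connection con = pool.getConnection();
                 PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM orders");
                 ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1);
            }
        }
    }
    ```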

  • (cs) in reply to noah
    noah:
    3.5k!? That would've been heaven! I started with a whopping 1k on my ZX-81.

    Luxury. When I started programming, users had to write down the memory values and enter them back in when the computer needed them. And we couldn't afford complete binary, so they had to do it using zeros only. With a screwdriver, wires, and duct tape.

    Then our dad and our mother would kill us and dance about on our graves singing hallelujah.

  • whocares (unregistered) in reply to Fedaykin

    Can't agree more. If there is a leak, it leaks, no matter how much hardware you throw at it. A developer should never assume that his code has an infinite amount of resources at runtime.

  • wtf (unregistered) in reply to The Vicar
    The Vicar:
    noah:
    3.5k!? That would've been heaven! I started with a whopping 1k on my ZX-81.

    Luxury. When I started programming, users had to write down the memory values and enter them back in when the computer needed them. And we couldn't afford complete binary, so they had to do it using zeros only. With a screwdriver, wires, and duct tape.

    Then our dad and our mother would kill us and dance about on our graves singing hallelujah.

    Uphill both ways?

  • (cs) in reply to The Vicar
    The Vicar:
    noah:
    3.5k!? That would've been heaven! I started with a whopping 1k on my ZX-81.

    Luxury. When I started programming users had to write down the memory values and enter them back in when the computer needed them. And we couldn't afford complete binary, so they had to do it using zeros only. With a screwdriver, wires, and duct tape.

    Then our dad and our mother would kill us and dance about on our graves singing hallelujah.

    Alright...

    When I started programming, in -1.7 nBytes mind you, programmers had to chisel their byte codes with their jaws (then known as bite-code) on rock tablets, while viewing results in our monochrome displays (as in one colour - black).

    We used to wake up 1 hour before dawn, spend 34 hour day shifts and get back to sleep while eating broken glass (with SPAM of course).

    Our father would chop off our heads and sip wine out of our skulls before killing us and singing hallelujah whilst dancing over our graves...

  • lolcats (unregistered) in reply to TGV

    I started with 80 bytes of RAM.

  • Bruteforce (unregistered) in reply to lolcats

    I find the answers preaching the RAM increase as a valid fix quite interesting. The app was leaking resources, not just memory. Sooner or later it will hit a nice, solid wall of having tied up too many resources. Adding more RAM won't fix that.

    Tossing in more hardware is a good way to buy more time to fix the real problem, as has been pointed out a few times here already. However, as has been seen on this very site on numerous occasions, the real fix won't happen as long as the system works reasonably well.

  • Tamahome (unregistered)

    TRWTF is obviously to increase physical server memory!

  • /dev/null (unregistered)

    Sounds like they ended up with a better system to me....

  • thomas (unregistered) in reply to Fedaykin
    Fedaykin:
    Sorry, but the WTF would be just throwing more memory at the problem, because all that would do is make the time between OOM errors longer.

    That's not necessarily true. Only when there are in fact (big) memory leaks. If your application simply needs 500MB, it's silly to try to run it with less via complicated and unmaintainable hacks. Especially when RAM is so cheap.

  • (cs) in reply to Nate
    Nate:
    if this was a high-demand app that was running on any decent server, 2GB of ECC RAM for those sorts of servers is not $56; it's more likely to cost over $1500
    Wow! You're paying far too much for your ECC RAM. Who are you buying it from? Sun? Apple? HP? Someone else with equally inflated prices? A quick google should find much better prices than that.
  • (cs)

    Why bother typing any code when I have my super-duper IDE do all the thinking for me?

    Hell, why do we all still force children to go to school to learn something? I mean, we could have robots do everything for us, so that nobody's head hurts when they're tying their shoelaces or finding a place on a map.

  • Anonymous Cow-Herd (unregistered) in reply to Bruteforce
    Bruteforce:
    Tossing in more hardware is a good way to buy more time to fix the real problem, as has been pointed out a few times here already. However, as has been seen on this very site on numerous occasions, the real fix won't happen as long as the system works reasonably well.

    Precisely. Too many people forget that there's no such thing as a "temporary solution".

  • XYZ (unregistered)

    Sort of reminds me of a story a friend of mine told me.

    It was a company that sold a service used by travel firms to book air tickets.

    They had performance problems; they had found out that these were due to the databases being implemented by "morons", so they should roll their own database to show the world how it's done.

    The management was convinced; after all, they always strove to hire the most talented programmers, so this would be awesome:

    It was very important that it was ready to roll before summer, because people went on vacation then, which meant that:

    1. The service was used a lot.
    2. No programmers around to fix bugs.

    4 programmers, 9 months = 3 man years.

    The end result:

    1. Many many bugs
    2. Worse performance

    But they just had to roll out the new version, as they hadn't worked on the old version at all.

    Many angry customers later, it turned out that the server park consisted of one machine running both the web server and the database server, on standard disks and a low-end CPU.

  • db (unregistered) in reply to Fedaykin

    The time to explore the "shove more memory in" option is before this guy is called in. The wrong person has been exposed to ridicule. Unfortunately, this sort of garbage can happen when costs are micromanaged so that you have zero extra hardware budget but some sort of software budget.

  • (cs)

    I'm surprised more people aren't recognizing the inefficiency of spending 25 man months to only net a 60% savings.

    On one of my early career programming gigs, I was tasked with reducing the memory footprint of a particular app. The "server" was maxed out on memory; it was the top-of-the-line model from the local computer store (and yes, this did mean it was a desktop running an end-user version of Windows, but that's not the point here), so it could not be upgraded further, and the application fundamentally had to work on just one machine.

    When I got the task, the process was "working", but it was taking two weeks to crunch all of the data - the system was effectively running on swap.

    In a week's time, I had the process down to using 10% of the computer's memory, and it ran in an hour. The next week, I'd fixed some analysis bugs and further reduced the footprint to 5% of the computer's memory, with a 10-minute run time.

    Note: I'm a unix guy. My solution did not involve using a unix system at all - although I did bring perl into the process, for some initial data cleanup.

    Also note: the original program loaded the entire dataset into memory at once, and then processed it into a database. The larger the dataset grew, the more memory it would take. The final version I produced performed all of its calculations as it streamed the data into the database, never storing more than one row of data in memory at a time, so its memory needs would not increase with a larger dataset unless the length of a row grew (see the sketch at the end of this comment).

    Final note: that was an extreme example. A more typical optimization is a 90% memory reduction in a man-month. If the memory reduction doesn't come from either fixing memory leaks or an overall scalability improvement, I end up unhappy and frustrated at the end of that month.
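
    A rough sketch of that streaming shape, in Java purely for illustration (the input format, table, and names are hypothetical, not the actual program):

    ```java
    import java.io.BufferedReader;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class StreamingLoader {
        // Reads and processes one record at a time instead of loading the whole
        // dataset, so memory use stays flat no matter how large the input grows.
        public static void load(Path input, Connection con) throws IOException, SQLException {
            try (BufferedReader reader = Files.newBufferedReader(input, StandardCharsets.UTF_8);
                 PreparedStatement insert =
                     con.prepareStatement("INSERT INTO results (item, total) VALUES (?, ?)")) {
                String line;
                while ((line = reader.readLine()) != null) {
                    String[] fields = line.split("\t");   // hypothetical tab-separated input
                    insert.setString(1, fields[0]);
                    insert.setLong(2, crunch(fields[1])); // the per-row calculation goes here
                    insert.executeUpdate();               // only one row held in memory
                }
            }
        }

        private static long crunch(String raw) {
            return Long.parseLong(raw.trim()); // placeholder for the real analysis
        }
    }
    ```

    Batching every N rows is a common refinement, but the key point is that memory use no longer scales with the size of the dataset.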

  • (cs) in reply to BK
    BK:
    The more garbage there is, the longer GC would take (freezing the whole application)

    In some programming languages. In others, GC time is relative to the number of objects which need to be freed per unit time, or relative to the number of objects which do not need to be cleaned. Also note that not all garbage collectors freeze the whole application (although I am unaware of any that allow memory to be allocated while the garbage collector is running.)

    Of course, GC only helps if the application knows the memory is ready to be freed - and in a memory leak situation, the application does not know the memory is ready to be freed. (In the classic case, the programming language could have determined it, but the languages didn't have detection capabilities. Today, since most of the languages do have said capabilities, memory leaks generally are not of that variety.)

    I personally prefer languages which free memory whenever it's clearly no longer needed, and only run the garbage collector when high memory conditions exist or the program is otherwise idle. It's much nicer not having pauses due to GC passes.
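
    A toy illustration of the point above about leaks in garbage-collected languages (hypothetical names): the collector can only reclaim what the program no longer references, so a registry that is never pruned leaks just as surely as an unfreed buffer in C.

    ```java
    import java.util.HashMap;
    import java.util.Map;

    public class SessionRegistry {
        // Entries are added on login but never removed on logout, so every
        // session's state stays strongly reachable forever. The GC cannot help:
        // as far as it can tell, the program still needs all of this memory.
        private static final Map<String, byte[]> SESSIONS = new HashMap<>();

        public static void onLogin(String sessionId) {
            SESSIONS.put(sessionId, new byte[64 * 1024]); // hypothetical per-session state
        }

        public static void onLogout(String sessionId) {
            // BUG: should call SESSIONS.remove(sessionId).
            // Adding RAM only stretches the time until OutOfMemoryError.
        }
    }
    ```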

  • Tatar (unregistered) in reply to Hypothetical
    Hypothetical:
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.
    You have only 1 KB of memory. Now what?

    1 KB is like an endless sea... The Motorola MC68HC705J1A microcontroller has 64 bytes of built-in RAM. Now say you're a Real Programmer.

  • (cs) in reply to phil

    Well, management's credo is: if it can be fixed for $60, then do it. If it holds for more than a year, it's good. Who knows which company I'll be working for in a year's time? When this stupid thing breaks again, I'll be long gone.

  • hmng (unregistered) in reply to anonymous

    I must be old. My first programs were on a Sinclair ZX-81 with 1K of RAM. Yes, 1K. And that was shared with the "frame buffer", which was dynamic. You could get OOM errors just by PRINTing or DRAWing on too much of the screen!

    (Initially the "frame buffer" was a list of CR chars, just 23 bytes. It increased in size as you put stuff on the screen.)

  • Spikkel (unregistered) in reply to An Old Hacker

    As a temporary solution, more memory (or, for other apps, a faster CPU, more disk, ...) is OK. But if the design is bad, it'll only get worse, since you're building on quicksand. A review with serious fixes in the worst places, like the old dude did, is the only long-term solution to this problem.

    The real WTF here is communication, since he couldn't sell his solution.

  • WigerToods (unregistered)

    TRWTF is that this non-WTF made it on this site.

  • Rabiator (unregistered)

    After reading the first page of comments, I wonder why there are only the two extreme positions of "just add memory" and "Wilhelm was right".

    My approach would be to go after the memory leaks, because those can exhaust any amount of RAM over time, but ignore the rest of the inefficiency.

  • (cs) in reply to ClaudeSuck.de
    ClaudeSuck.de:
    Write-only memory? Well, that must be useful.
    Sure, what do you suppose they make /dev/null out of? You can put a ton of stuff down there and none of it ever comes back.

    Most computers come with plenty of WOM on board, but if you fill it up your /dev/null stops working and you have to send your motherboard back to the factory to get it drained. See e.g. the datasheet for the Signetics 25120 9046xN WOM part.

  • (cs) in reply to SurturZ
    SurturZ:
    anonymous:
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.

    16k!?! I started with 3.5k on a VIC-20, thankyouverymuch.

    Heheh truncating the command PRINT to PR to save three bytes. Removing all whitespace. Good times.

    Hah, I beat ya, I truncated the command PRINT to ? to save FOUR bytes!

    Nahh, I didn't; it didn't actually matter, because all the old CBM machines running MS BASIC tokenized every keyword down to a single byte in the $80-$FF range anyway. Removing the whitespace did make a difference, though; you're right there.

  • (cs) in reply to Anonymous Coward
    Anonymous Coward:
    Hah! Try 768 bytes (512 12-bit words), not of memory but of instruction space! That was just last year, too.
    768 bytes? Luxury! When I started programming I had just 256 bytes, and I had to re-enter them by hand every time I powered up, and I were grateful for 'em! And seven LED segments should be enough for anybody!

    Later on I got the 512 byte expansion memory and 300baud cassette tape interface. Now that were real luxury, that were!

  • (cs) in reply to Ozz
    Ozz:
    anonymous:
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.
    16k!?! I started with 3.5k on a VIC-20, thankyouverymuch.
    I started with 1k on a Sinclair ZX81 back in '81. Though, I did have some fun in college in the mid 80s with a 6502-based EMMA (25-key keypad and a 7-segment display - it was programmed in raw hex) but I can't remember how much RAM it had.
    I got started on a Sinclair (well, "Science of Cambridge" as his firm was called back then) Mk14. It had 256 bytes RAM, a hex keypad and a 7-segment display.

    Later I got the 512-byte expansion memory, which was just huuuuge, so I was really glad I got a cassette interface with it so I could save and load programs instead of entering them by hand every time I switched it on. :-)

  • Bulletmagnet (unregistered) in reply to anonymous

    Luxury! I started with the MITS Altair which had 256 bytes of memory. And it was uphill both ways.

  • WizardStan (unregistered) in reply to Bruteforce
    Bruteforce:
    I find the answers preaching the RAM increase as a valid fix quite interesting. The app was leaking resources, not just memory. Sooner or later it will hit a nice, solid wall of having tied up too many resources. Adding more RAM won't fix that.

    Tossing in more hardware is a good way to buy more time to fix the real problem, as has been pointed out a few times here already. However, as has been seen on this very site on numerous occasions, the real fix won't happen as long as the system works reasonably well.

    Who says the problem needs to be fixed? Not all software needs to be perfect. If it's leaking memory just a bit at a time, a weekly or monthly reset script is by far the cheaper, easier solution. It's a sad state that that is the way business needs to operate, but... that's the way business needs to operate. In the ideal world you would have all the time and resources to fix it, but in the real world you can't always afford that luxury. 5 developers spent 5 months working on old code that arguably didn't need fixing when they could have been working on new code and patches that did need to be done. The WTF is that no one, not even the boss, did any sort of cost/benefit analysis and asked for other solutions.

  • Indima (unregistered)

    So really, what is the WTF supposed to be in this meaningless story? That interns can be clueless? Of course they didn't re-architect the whole codebase only to reduce the memory footprint. They did it because they wanted stable, maintainable code that didn't crash the server every single day. The only thing Bob did was embarrass everyone at the presentation and come across as a clueless jerk.

  • anoncow (unregistered) in reply to anonymous

    3.5k???? I started with 1k on a ZX-81, thankyouverymuch.

  • (cs) in reply to Jeff Rife
    SomeCoder:
    I don't think 2 GB of RAM is going to come down in price THAT much in 7 years. Currently, the price of 2 GB of RAM - at the cheapest place I know of - is about $120 which is a lot more than $20.

    In 2001 128MB of PC1600/PC2100 cost about £200. 7 years later it costs less than $10.

    So in 7 years' time I would expect 2GB of RAM to easily cost less than $20, as it is currently around $100.

    Check out http://www.willus.com/archive/timeline.shtml for the numbers. And think about what a common computer in use 7 years ago looked like. On the kind of developer machines I was using at work in 2001, 256MB was common and seen as a large amount.

  • Rob G (unregistered) in reply to Fedaykin

    Only if the memory footprint was increasing while running. If the memory footprint was pretty stable, say going up to 512MB and staying there, then get more memory. If, OTOH, it's going up at a steady rate, there's surely a fix needed.

  • Rob G (unregistered) in reply to anoncow

    I remember them too - some of that 1K was actually given over to screen memory.

  • watkin5 (unregistered) in reply to anonymous
    anonymous:
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.

    16k!?! I started with 3.5k on a VIC-20, thankyouverymuch.

    Luxury. I wrote a control system for a robot arm with 1k on a ZX80.

  • Christian (unregistered) in reply to Dave G.
    Dave G.:
    It's obvious that most people here have no business training whatsoever.

    5 months (someone estimated this at $200,000) to fix the problem, compared to buying a RAM module for $60? How is this even a discussion? How can you possibly argue that the extra $199,940 that was spent was worth it? Hell you might as well rewrite the app from scratch.

    I've been programming my whole life and while I love the idea of optimising the hell out of something and coming away with a positive result (50% improvement is very impressive), the business side of me could never, ever justify such a huge disparity in time and cost.

    I know the extra RAM doesn't actually solve the problem and it's a temporary fix. As such, while the code is being maintained through its normal course, small periods of refactoring / optimisation should be conducted to incrementally improve the application.

    But for god's sake, blocking off 5 months of time to work on something like this is financial suicide. For a public company, it would be borderline criminally negligent to waste so much money so cheaply.

    What if it took 12 months to fix this? That's $480,000. How about 2 years for almost a million dollars? When do you say "ok, that $60 RAM bank is looking pretty good now"?

    Hint: the correct answer is not "we will never say that, we should spend all the time necessary to fix the application, no matter what the cost is". If you disagree with this statement, then I'd advise you never to start your own business, because you will be bankrupt within a year.

    Get some perspective please my fellow geeks. I know it's cool to hate on "business decisions", and "managers" but this one isn't close. This time, the geeks have it wrong. Trust me.

    I understand where you're coming from; there has to be a line drawn somewhere. But I'm confused as to how you can't see the benefit of the investment in the given story. First off, $60 is the cheapo end-user price. If it's a server, chances are memory pricing will be significantly higher.

    But let's disregard that. This is a corporation; it's expected that they pay for the RAM and the optimization. In 5 months' time, their software is optimized and can be distributed to their clients through a patch, 4 programmers have had some seriously old-school, high-impact real-time training, and the software will no doubt be much easier to upgrade now that everything has been commented, streamlined, etc. The bulk of the work is already finished.

    Code from scratch in 5 months, on a whim? I'm not so sure. Even if you could come up with a schedule to release it as such, there are always issues cropping up pushing said deadline back.

    If nothing else, think of the rapport. All their clients using said software suddenly have this amazing upgrade that doesn't cost them anything, fixes their stability issues and speeds everything up.

    Word of that kind can't be easily bought and it spreads fast.

  • (cs) in reply to SomeCoder
    SomeCoder:
    That's still $100 I'd rather spend on something else than being required to put more memory in my computer because some programmer thought his time was worth more than mine. Yes, that's a little selfish of me but think about it: do you value your time more than mine? I'm sure you do, just as I value my time AND money over some other programmer's that I don't even know :)

    For an application that someone is selling, that is a valid argument, especially if it has a lot of users who each need to buy RAM, or where it is targeted at a limited platform such as mobile phones. But reading between the lines of the post you were replying to, it was talking about an application running in-house on one machine, where the person who would get the code improved is the same person who would need to buy and install new boxes.

    In an ideal world we would all have beautifully tuned software that needs minimal RAM and CPU, but often there are other priorities.

  • (cs) in reply to ha
    ha:
    You are an idiot

    This from someone who has yet to learn how to use the quote button. Impressive.

  • Edss (unregistered) in reply to Fedaykin
    Fedaykin:
    Sorry, but the WTF would be just throwing more memory at the problem...

    Throwing more memory at the problem is only yet another duct tape and chicken wire solution.

    So why not put the memory in, so that it at least runs better for the 5 months that all the developers are working on the optimisation?

    But I am a little concerned by the "2GB of memory for fifty, sixty bucks" in a day when a production web server has 512MB. It doesn't quite make sense.

  • KD (unregistered)

    So this is a "submitter WTF", right? We're constantly reading here about unmaintainable software that desperately needs rework, yet the pointy haired management types insist that a cheap hardware upgrade will do the job. And so management short-sightedness delays a whole bunch of maintenance overheads that are surely going to rear their ugly head sooner or later and the WTF is complete. But here we have a team that do the right thing and improve the software rather than bolt in more hardware as a temporary band-aid.

    Yet, someone thought that this in itself was a WTF and submitted it here. So the writers have crafted a very clever "anti-WTF" where the WTF is actually that someone thought this was a WTF and submitted it.

    Right?

  • Sam (unregistered)

    "Wilhelm was given four developers to help with the task."

    None of whom asked the same question.

  • Billy Bob (unregistered) in reply to Dave G.

    It's obvious that most people here have no business training whatsoever.

    Right. No doubt you'd argue that a patient slashed from shoulder to groin just needs a bandage to fix the problem, right? It's the cost-effective solution, conserves on the time of those ridiculously expensive doctors, and frees up beds for the gunshot wounds. Besides, our legal department can handle any blowback from the possible bad outcomes due to the treatment, and we did provide a help center in India to deal with any concerns by the victim (let them do some work for a change!).

    Sounds like the right option to me. I don't have any business training, but I know our stockholders will be pleased.

  • (cs) in reply to dkf

    Nate: you're paying WAY TOO MUCH for your RAM

    Kingston 4GB (2 x 2GB) 240-Pin DDR2 SDRAM ECC Registered DDR2 400 (PC2 3200) Dual Channel Kit Server Memory - Retail

    http://www.newegg.com/Product/Product.aspx?Item=N82E16820134267

    $140

  • (cs)

    Here is the opposite story (true, I swear): Several years ago, I assisted a company building an internet web app for order entry for a telco. The architecture was active record + business layer + template engine. Eventually, the system grew to 600 forms and was used by several hundred concurrent users. With the growing application size and user count, the system quickly ran out of memory, and as a quick fix they bought, IIRC, 2GB of RAM, which was pretty expensive back then, 2000 USD or so. Then I learned about the problem, analysed it, and saw that it came from the template engine, which kept the parse trees of the templates per user. Changing that to a global template cache dramatically reduced the memory footprint... and that fix took only a few hours.
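
    A minimal sketch of that kind of fix (hypothetical types; the engine's real parse step would be plugged in): parse each template once and share the immutable parse tree process-wide, instead of keeping one copy per user session.

    ```java
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    public class TemplateCache {
        /** Stand-in for the engine's parsed-template type (hypothetical). */
        public interface Template {
            String render(Map<String, Object> model);
        }

        private final Map<String, Template> cache = new ConcurrentHashMap<>();
        private final Function<String, Template> parser;

        public TemplateCache(Function<String, Template> parser) {
            this.parser = parser; // the engine's actual parse step goes here
        }

        // One shared parse tree per template name for the whole process, instead
        // of one per user session.
        public Template get(String name) {
            return cache.computeIfAbsent(name, parser);
        }
    }
    ```

    The only real caveat is thread safety: sharing is safe only as long as a parsed template never mutates while rendering.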

  • Master TMO (unregistered)

    I think the WTF on Wilhelm's side is that he didn't even consider the possibility of the hardware fix. As the story is told, he was so stuck in the past that 'no webserver should need more than 512k', even though webservers can easily have GBs today.

    His team gets kudos for actually fixing all those problems though. In my experience, being allowed to do that almost never happens. Kludge and band-aid until the whole system is practically a mummy, and hopefully by then there will be a new application available that can completely replace the doddering heap that is left.

  • WizardStan (unregistered) in reply to Billy Bob
    Billy Bob:
    It's obvious that most people here have no business training whatsoever.

    Right. No doubt you'd argue that a patient slashed from shoulder to groin just needs a bandage to fix the problem, right? It's the cost-effective solution, conserves on the time of those ridiculously expensive doctors, and frees up beds for the gunshot wounds. Besides, our legal department can handle any blowback from the possible bad outcomes due to the treatment, and we did provide a help center in India to deal with any concerns by the victim (let them do some work for a change!).

    Sounds like the right option to me. I don't have any business training, but I know our stockholders will be pleased.

    That's not a very good analogy. I could work with it, but it'd be quite exaggerated. A better analogy would be if the person cut their finger pretty badly. You can either slap a bandage on it and hope that's enough (add RAM), do a simple stitch job to ensure the wound is closed (fix the key memory leaks), or cut off the patient's arm and replace it with a cybernetic implant. That's the benefit of a cost analysis: you can see just how much work needs to be done and which option actually yields the greatest payoff. Seems to me the story should have ended with either a bandage or a stitch job: if the leaking (or bleeding) is reasonably manageable, just a band-aid will work. If it's bad enough that it will continue to bleed out around the bandage, then for sure it needs to be stitched up. But Wilhelm chose the third option: rewrite quite a bit of code which, while not the most efficient, was certainly acceptable for the task. Maybe if the patient's finger was not only cut but also infected with an alien parasite this would be an excellent option, but why waste time and money on fixing other things in the name of efficiency which otherwise worked just fine?

  • (cs) in reply to FredSaw
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.

    Lyle could've done it with 4K.

  • (cs) in reply to KD
    KD:
    So this is a "submitter WTF", right? We're constantly reading here about unmaintainable software that desperately needs rework, yet the pointy haired management types insist that a cheap hardware upgrade will do the job. And so management short-sightedness delays a whole bunch of maintenance overheads that are surely going to rear their ugly head sooner or later and the WTF is complete. But here we have a team that do the right thing and improve the software rather than bolt in more hardware as a temporary band-aid.

    Yet, someone thought that this in itself was a WTF and submitted it here. So the writers have crafted a very clever "anti-WTF" where the WTF is actually that someone thought this was a WTF and submitted it.

    Right?

    I think the actual WTF is that Wilhelm did the right thing, and then got trash-talked for doing the right thing. Another of the common WTFs we usually see over here...
