• SomeCoder (unregistered) in reply to WizardStan
    WizardStan:
    SomeCoder:
    I wouldn't say that's a good excuse for poor code. I'd also say that your dollar estimates, even for 2015, are way too low. I don't think 2 GB of RAM is going to come down in price THAT much in 7 years. Currently, the price of 2 GB of RAM - at the cheapest place I know of - is about $120, which is a lot more than $20.

    Also, in the meantime, your users suffer because the server crashes a lot and when it doesn't crash, it takes forever to load pages. That's a large amount of lost revenue that you could be gathering but aren't because users can't see/click on ads and/or go to competing products.

    www.tigerdirect.ca/applications/SearchTools/item-details.asp?EdpNo=3404049&Sku=O261-8038 There, 4 gigs for $100. I just saved you $20 and doubled the amount you get. By Moore's law, which has held very steady and should continue to do so until about 2020 according to recent reports, the cost of a given amount of memory should halve about every 18 months, so by 2010, that same 4 gigs should be $50. By mid 2011, it should be $25 for 4 gigs of RAM, assuming past trends continue more or less as they have. But that's all moot anyway, because $50 or $120, it's still far less than even one day of effort to find and fix any large memory problems. And you speak of downtime and lost revenue due to reboots? Wilhelm's (and by extension your) solution to rewrite the code comes in at 5 months of daily crashes and reboots. The intern's (and by extension my own) solution of more memory comes down to one (maybe two) days of crashes and reboots to replace the RAM. A week, perhaps, if there's a lot of paperwork to fill out and sign. Even a month, if it's a particularly troublesome company, is still less.
    I'll agree, rewriting was the right solution from a developer standpoint, but there are times in real-world business where you have to step back and say it'll just cost too much to fix. This, of course, is all under the assumption that memory leaks were, at most, a small part of the problem, if there were any at all. If the problem really was that it was leaking memory like a sieve, then the correct solution most definitely involves rewrites. But first, get some more RAM into that sucker so it doesn't need to be restarted so often.

    That's still $100 I'd rather spend on something else than being required to put more memory in my computer because some programmer thought his time was worth more than mine. Yes, that's a little selfish of me but think about it: do you value your time more than mine? I'm sure you do, just as I value my time AND money over some other programmer's that I don't even know :)

  • Jay (unregistered)

    Years ago I heard one of the best pieces of programming advice I've ever heard: A co-worker said that his mother always told him, "Never walk through a room without picking something up." He applied this principle to programming: Every time he had to make a change to a program, he would make a little extra effort to do some clean-up, so that the program was now more efficient, more maintainable, or otherwise generally better than it was before.

    The beauty of this approach is that you don't have to devote hundreds of man-hours to a dedicated clean-up project. You just tack a few extra percent onto every project, probably an amount that would be lost in the general estimation errors anyway.

    Unfortunately, I've been in many jobs over the years where management forbids this approach, on the reasoning that fixing anything not directly related to the current change introduces the risk that you will create new bugs. This is true, of course, but the inevitable result of that philosophy is that you doom yourself to an entropy spiral: Every change makes the system a little worse, which means that every change becomes more difficult to make as the structure becomes more rickety and unstable, until the whole thing collapses.

  • cozappz (unregistered) in reply to Fedaykin
    Fedaykin:
    Sorry, but the WTF would be just throwing more memory at the problem, because all that would do is make the time between OOM errors longer. It wouldn't solve the core problem of poor resource management in the system.

    Now, from TFA, I would guess there is a big chance that the effort was way overboard, but the core intent was correct: fix the source of the problem.

    Throwing more memory at the problem is only yet another duct tape and chicken wire solution.

    Jeez, got one of those apps for which 8GB is not enough, but no one wants to fix those memory leaks. So we restart the app every day at midnight. Way to go, Bob!

  • Steve (unregistered)

    Let's please stick to the topic at hand: Commodore PET BASIC and VIC-20 architecture.

  • (cs) in reply to Fedaykin
    Fedaykin:
    Sorry, but the WTF would be just throwing more memory at the problem, because all that would do is make the time between OOM errors longer. It wouldn't solve the core problem of poor resource management in the system.

    Now, from TFA, I would guess there is a big chance that the effort was way overboard, but the core intent was correct: fix the source of the problem.

    Throwing more memory at the problem is only yet another duct tape and chicken wire solution.

    And what's the big problem with making the time between OOM errors longer? If it goes from once per day to once per week, on a production machine, that means fewer missed transactions, fewer lost sales, less time lost on reboots, and generally money saved or less income lost. It's a good idea to spend a trivial amount of money while you're working to resolve the root cause. Both Wilhelm and Bob have it wrong. You need to put in Bob's solution to stop the bleeding for five months, while at the same time reworking the code to stop memory leaks/reduce footprint/improve code.

    For crying out loud, you could be going bankrupt while you're optimizing your code.

  • Jay (unregistered) in reply to Leon
    Leon:
    Wilhelm:
    krupa:
    Wilhelm:
    Hypothetical:
    You have only 1 KB of memory. Now what?

    Buy some more memory.

    Did you not read the article?

    Someone stole your wallet. Now what do you do?

    Kill the interviewer and steal his wallet.

    Your interviewer is an undead zombie and cannot be killed. Also he has no wallet. Now what do you do?

    Answer #1: Buy a magic silver stake to kill the zombie. (Oh, wait, my wallet was stolen, I don't have any money. Bummer.)

    Answer #2: All the computer stores in the world are out of stock on memory chips. What do you do now?

  • dantaylor08 (unregistered) in reply to Tom

    It's the principle of the thing though. 2 years from now, when somebody who didn't write the code has to maintain code that is full of crap, they're going to have a lot of trouble finding/fixing errors. Even if it's 'not that big a deal' to add a couple gigs of RAM to the box, it's much better just to get the job done right the first time and not have to deal with issues caused by bad code later.

  • Daniel (unregistered)

    Anyone who said there was nothing in the article indicating a memory leak is the real WTF. Go learn how to debug software.

    The server needed to be rebooted regularly, and with increasing frequency. When it got to rebooting every day, they decided to do something about it.

    Hey, guys, if there isn't a memory leak, then rebooting doesn't help. Period.

    The only thing a reboot does is release unneeded resources. If the server can run through the day, including, of course, the peak hours, and after a while -- originally a few days -- gets to the point where it needs to be rebooted, and rebooting helps, then there are unneeded resources in memory. I.e., a memory leak.

    So, to all of you who think buying memory would have solved the problem and there was no evidence of memory leaks... well, LEARN something. It might help you one day.
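
    To make "unneeded resources in memory" concrete, here is a minimal Java sketch of the classic pattern: a cache that is only ever added to, so the heap grows with every request until a restart clears it. This is a hypothetical illustration, not code from the article.

        import java.util.HashMap;
        import java.util.Map;

        // Hypothetical illustration of a leak: entries are added per request but
        // never evicted, so memory climbs until the process is restarted.
        public class LeakyCache {
            private static final Map<String, byte[]> CACHE = new HashMap<>();

            public static byte[] lookup(String key) {
                // Each cache miss allocates ~1 MB and pins it in the static map forever.
                return CACHE.computeIfAbsent(key, k -> new byte[1024 * 1024]);
            }
        }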

  • iToad (unregistered) in reply to FredSaw
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.

    Bah!! An 8051 has 128 -Bytes- (not KB) of memory. Some of this is unavailable because it is memory-mapped to chip registers. Try fitting the heap and stack of your fancy C program into one of these.

  • WizardStan (unregistered) in reply to SomeCoder
    SomeCoder:
    That's still $100 I'd rather spend on something else than being required to put more memory in my computer because some programmer thought his time was worth more than mine. Yes, that's a little selfish of me but think about it: do you value your time more than mine? I'm sure you do, just as I value my time AND money over some other programmer's that I don't even know :)
    That's irrelevant to the story and my own anecdote. In these cases, the funds for the RAM and the developers come from the company. We're not talking about a boxed app which is going out to clients. These are apps developed in-house with specific purposes in mind. One way or another, you're paying for the solution. You're the boss, then. Which is more valuable: $60 and immediately fixed, or $100'000 and fixed in 5 months?
  • CoderHero (unregistered) in reply to Daniel
    Daniel:
    Anyone who said there was nothing in the article indicating a memory leak is the real WTF. Go learn how to debug software.

    The server needed to be rebooted regularly, and with increasing frequency. When it got to rebooting every day, they decided to do something about it.

    Hey, guys, if there isn't a memory leak, then rebooting doesn't help. Period.

    The only thing a reboot does is release unneeded resources. If the server can run through the day, including, of course, the peak hours, and after a while -- originally a few days -- gets to the point where it needs to be rebooted, and rebooting helps, then there are unneeded resources in memory. I.e., a memory leak.

    So, to all of you who think buying memory would have solved the problem and there was no evidence of memory leaks... well, LEARN something. It might help you one day.

    It could just as easily be a memory leak as a scalability problem.

  • SomeCoder (unregistered) in reply to WizardStan
    WizardStan:
    SomeCoder:
    That's still $100 I'd rather spend on something else than being required to put more memory in my computer because some programmer thought his time was worth more than mine. Yes, that's a little selfish of me but think about it: do you value your time more than mine? I'm sure you do, just as I value my time AND money over some other programmer's that I don't even know :)
    That's irrelevant to the story and my own anecdote. In these cases, the funds for the RAM and the developers come from the company. We're not talking about a boxed app which is going out to clients. These are apps developed in-house with specific purposes in mind. One way or another, you're paying for the solution. You're the boss, then. Which is more valuable: $60 and immediately fixed, or $100'000 and fixed in 5 months?

    Ok well then I pick the $60 and immediate fix.

    Oh wait, it's not immediately fixed. It just got some duct tape on it.

    Ok, so it's been another 5 months and now it's crashing daily again. Oh crap, what now? $60 more and another immediate "fix" ?

    Ok, 5 months later and it's crashing daily again...

    Repeat ad nauseam.

    Sorry but the 5 month fix is going to be worth way more in the long run. The app will run better, customers will be happier and you won't have to worry about the server crashing. We might even be able to create another site and put it on the same server because of the extra resources we saved.

    As for boxed apps vs web apps, I still think it's relevant. Both the boxed app that requires the end user to have 8 GB of RAM and the web app that barely functions are going to piss off the end user. The result is the same: users who don't like and don't use your product.

    So now you are the manager. Which do you pick: income flowing from end users or nobody using your website and getting laid off when the company "restructures" ?

  • Mr (unregistered) in reply to m k
    m k:
    Yes, adding more memory is the short term solution, but what happens in a year or two if demand increases or more stuff needs to be added to the app and its memory requirement jumps up again? It wouldn't scale as well in its original form.

    But it only cut the memory in half. So when the old version would require 4GB, the new version would require 2GB. Not a lot of difference. And I suspect that once they reach that, they could probably use a second server as well.

  • Mr (unregistered) in reply to Salami
    Salami:
    5 months work for 5 guys vs $60. You guys that agree with Wilhelm must all be on the receiving end of paychecks and not the one writing them.

    I agree. A business is about making money. It may be a bad implementation, but if they can fix the problem with a cheap purchase, it is the right thing to do. It is hard for us to realize that, since we have pride in our work. But sometimes the cost of perfection is too high.

  • Mr (unregistered) in reply to taylonr
    taylonr:
    True, but as was mentioned above step 1 is increase the ram, step 2 is rewrite the program. What happens in a year or two when the thing blows up because memory leaks weren't fixed and bad architecture continues? At some point a rewrite/redesign/reimplementation needs to be done. Do you do it now, while it's manageable, or 2 years down the road where it takes 5 men 12 months to rewrite?

    Buy a second server? You can buy a lot of servers with 25 months of paid work. You forget that the memory saved was under 50%. If the application lasts for 5 years and you have a total of 4 servers, the improved version would still require 2 servers. Would you rather spend 25 months rewriting the code, or spend a month's pay on a couple of servers?
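
    To put rough numbers on that comparison, here is a minimal Java sketch with hypothetical figures (the comment gives no salaries or server prices; the constants below are placeholders, not data from the article):

        // Hypothetical cost comparison: "25 months of paid work" versus a couple of servers.
        public class CostComparison {
            public static void main(String[] args) {
                double devMonthCost = 8_000;             // assumed fully loaded cost per developer-month
                double rewriteCost = 25 * devMonthCost;  // 5 developers x 5 months
                double serverCost = 5_000;               // assumed price of one extra server
                double hardwareCost = 2 * serverCost;    // the "couple of servers" alternative
                System.out.printf("Rewrite: ~$%,.0f vs extra hardware: ~$%,.0f%n",
                        rewriteCost, hardwareCost);
            }
        }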

  • Fedaykin (unregistered) in reply to Bappi

    I did not say that nothing should be done as a stop gap while the real fix takes place.

  • Franz Kafka (unregistered) in reply to MindChild
    MindChild:
    You must be new to the industry. When a manager sees that a problem of this magnitude can be fixed with $60 and 10 minutes of work, all of a sudden, that becomes the standard to him. Months down the line, when the app begins to blow up again, and this time you can't throw more hardware at it, the manager will say something along the lines of "Why would I give you the time and resources to RECODE this application when the fix last time cost me $60 and 10 minutes? Your solution just isn't feasible". So another hack is dreamed up. Then another. Then another. A couple of years down the road, you have an app that is SO BAD that it needs to be COMPLETELY rewritten. But then... you have to fight (and lose) with the manager whose expectations are, and will forever be, that a fix is a few dollars and a few minutes.

    If you have a boss that doesn't understand the difference between a stopgap and a permanent solution, make sure that he'll take your word or go get another boss.

  • DefEdge (unregistered) in reply to Fedaykin
    Fedaykin:
    Sorry, but the WTF would be just throwing more memory at the problem, because all that would do is make the time between OOM errors longer. It wouldn't solve the core problem of poor resource management in the system.

    Now, from TFA, I would guess there is a big chance that the effort was way overboard, but the core intent was correct: fix the source of the problem.

    Throwing more memory at the problem is only yet another duct tape and chicken wire solution.

    I would have to agree with this. He did something now that they would have had to do later down the road anyway. All the clutter was causing memory leaks, so all that work they did that seemingly wasted time and resources was a must. Although he probably won't be working there later, they will thank him.

    Actually, they probably won't... because they won't have any more problems, so he'll just be forgotten. A pity really.

  • (cs)

    Regarding just getting it working the cheapest way possible...

    My brother is a CPA. He has a small office with ~10 employees. Each PC has its own licensed copy of the various accounting programs, but all are mounted from the network server.

    Everything was running win-98. Over the years, each annual upgrade of the accounting software would suck up more and more system resources until the only way to do two things was to exit the current program and start the other one, and then switch back when done.

    I warned him numerous times to upgrade his hardware gradually, but he complained that it would "cost a lot of money", and that the programs he used weren't guaranteed to work with the new OS.

    He put me in touch with the place that made the accounting software. I chatted with the lead developer, who essentially told me that they didn't care about ram usage; they just had to get the software out the door for tax season each year, and that they just couldn't worry about their customers' PCs. If someone left the software running too long and it ran out of memory, the solution was to tell the customer to reboot. They were also extremely far behind the curve (3 years) in supporting each new OS, which made upgrading your OS when purchasing a new PC something of a challenge.

    Then it finally hit the wall. Most of the apps sucked up so much ram that they wouldn't even load. He wound up buying new machines. But wait, Win-98 didn't have drivers for the newer hardware, so he had to upgrade to XP. But wait, the apps wouldn't run correctly under XP. Ok, compatibility mode. But wait, printing didn't work quite right. Ok, get just the right printer drivers. But wait, the network file server wouldn't talk to Win-XP correctly. Ok, upgrade the server and the OS to a server version of windows.

    And so it went for 3 weeks. During tax season. When the work was piled high and nobody could get anything done because of all the problems.

    It wound up costing him more than $100K in lost business because he didn't want to do $20K worth of hardware/os upgrades because the apps he needed were getting kludgier by the day.

    There's nothing he can do about the apps being behind the OS curve, but he now upgrades 1/3 of his hardware annually - it's essentially become a fixed annual cost.

    There are real costs associated with putting out crappy software, and taking shortcuts just pushes the cost onto someone else.

  • anon (unregistered) in reply to Andrew

    Economics: you fail it!

    Management spent 2 man-years worth of time that could have been used on other projects. Those man-years aren't free.

  • John (unregistered)

    Can you imagine how.... um.... strained working conditions would be with Wilhelm following that meeting....

    The real WTF IMHO is .... why didn't anyone else including "bob" think of that before said meeting....

  • Franz Kafka (unregistered) in reply to SomeCoder
    SomeCoder:
    Ok well then I pick the $60 and immediate fix.

    Oh wait, it's not immediately fixed. It just got some duct tape on it.

    Ok, so it's been another 5 months and now it's crashing daily again. Oh crap, what now? $60 more and another immediate "fix" ?

    Ok, 5 months later and it's crashing daily again...

    Repeat ad nauseam.

    Sorry but the 5 month fix is going to be worth way more in the long run. The app will run better, customers will be happier and you won't have to worry about the server crashing. We might even be able to create another site and put it on the same server because of the extra resources we saved.

    As for boxed apps vs web apps, I still think it's relevant. Both the boxed app that requires the end user to have 8 GB of RAM and the web app that barely functions are going to piss off the end user. The result is the same: users who don't like and don't use your product.

    So now you are the manager. Which do you pick: income flowing from end users or nobody using your website and getting laid off when the company "restructures" ?

    How about the $60 fix, then fix the memory leaks in chunks. Release every month or so until it's stable and nice again (I guess about 2-3 months).

  • maht (unregistered)

    64k 32k 16k 1k

    pah, luxury!

  • WizardStan (unregistered) in reply to SomeCoder
    SomeCoder:
    Ok well then I pick the $60 and immediate fix.

    Oh wait, it's not immediately fixed. It just got some duct tape on it.

    Ok, so it's been another 5 months and now it's crashing daily again. Oh crap, what now? $60 more and another immediate "fix" ?

    Ok, 5 months later and it's crashing daily again...

    Repeat ad nauseam.

    Sorry but the 5 month fix is going to be worth way more in the long run. The app will run better, customers will be happier and you won't have to worry about the server crashing. We might even be able to create another site and put it on the same server because of the extra resources we saved.

    As for boxed apps vs web apps, I still think it's relevant. Both the boxed app that requires the end user to have 8 GB of RAM and the web app that barely functions are going to piss off the end user. The result is the same: users who don't like and don't use your product.

    So now you are the manager. Which do you pick: income flowing from end users or nobody using your website and getting laid off when the company "restructures" ?

    I pick neither, because they're not choices, they're outcomes from choices based on potentially flawed information. You've made a lot of assumptions about the project. In my case, I've already done the projections, and 4 gigs of RAM will easily last it for the next 6 or 7 years based on expected growth. I even factored in CPU and network requirements, and it's definitely the RAM that's the bottleneck, but unless business suddenly booms, it won't be a problem. If business suddenly booms, the antithesis to your argument against the quick fix, then the server will need to be replaced sooner, but hey, business is booming, and a few extra thousand for a new server still beats the $100'000 in development costs. It's actually not that high, closer to $20'000 to be honest, but even so, a new system is still cheaper, and further down the road, which means that the $20'000 being saved now is making money now.

    As for the article, I'm going to make an assumption of my own, and then back it up with logic: 2 gigs of RAM would have allowed them to survive for at least 20 months. If the problem is purely memory leakage, that is, the entirety of the 250 megs they saved was from memory leaks, then that's 250 megs per day lost. 2 gigs = 250 megs app overhead + (250 megs per day) * 7 days. Put in a cron job to restart every Sunday morning, and they survive indefinitely. I'd wager that it was, in fact, closer to 500 megs of overhead, and only a few megs of leaked memory, but that only improves the figures. On the other hand, if it's a scalability problem, more pages and more users means more memory, then my simple assumption is that in the 5 months it took to work in the fixes, the frequency of reboots did not increase drastically. If we assume that the entirety of the 500 megs was in "cached" data (i.e., the app had no overhead at all, with each page/user taking up a portion of that full 500) then the user base can continue to grow linearly for 4 times as long (20 months) before bumping into the edge of RAM. If they took a few hours to sit down and work out the prospects, they might decide on 4 gigs instead and schedule a server upgrade in 3 years to something that can handle more if they need it. I would be surprised if a web server continued to grow indefinitely like that, however. My experience is that they tend to plateau eventually.
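
    A minimal Java sketch of that worst-case projection, using the figures from the comment above (250 MB of steady overhead plus 250 MB of leakage per day against 2 GB); the numbers are the comment's assumptions, not measurements:

        // Worst case from the comment: everything saved (250 MB/day) was leakage.
        public class UptimeProjection {
            public static void main(String[] args) {
                final int totalMb = 2048;      // proposed 2 GB of RAM
                final int overheadMb = 250;    // assumed steady-state application footprint
                final int leakMbPerDay = 250;  // assumed daily leakage
                int daysBetweenRestarts = (totalMb - overheadMb) / leakMbPerDay;
                // Prints 7: a scheduled weekly restart (e.g. early Sunday morning) keeps it up.
                System.out.println("Days between restarts: " + daysBetweenRestarts);
            }
        }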

  • Mr (unregistered) in reply to Kazan
    Kazan:
    512MB to run a webpage is SMALL for J2EE?

    It depends on the application. It all depends on the complexity and the number of users. At my old work, we once ran a small J2EE server on a desktop that was used by someone else to program, browse the internet and email. The server ran in the background while it was in development, and was actually used quite a bit before we moved it to a dedicated server. The machine was running Windows 2000 with 256MB RAM!

    Kazan:
    now I know why I write PHP... rarely have to up the memory limit over 8MB

    You are talking about the memory limit for a single PHP process. The numbers above are for all the processes combined, including the actual Java-based web server.
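
    The distinction is easy to see from inside the JVM itself: the PHP memory_limit caps one script's usage, while a J2EE server is a single long-running JVM whose ceiling is set with -Xmx and shared by everything it hosts. A minimal sketch (hypothetical, for illustration only) that reports those JVM-wide figures:

        // Reports the JVM-wide heap figures that the quoted 512 MB refers to.
        public class JvmMemoryReport {
            public static void main(String[] args) {
                Runtime rt = Runtime.getRuntime();
                System.out.println("max heap (bytes):       " + rt.maxMemory());   // bounded by -Xmx
                System.out.println("committed heap (bytes): " + rt.totalMemory());
                System.out.println("free of committed:      " + rt.freeMemory());
            }
        }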

  • lokey (unregistered) in reply to SomeCoder
    SomeCoder:
    Edward Pearson:
    You would pay a TEAM of people for FIVE months, to do something that could be achieved in <15 minutes and 60 dollars?

    Pray you're never put in charge.

    So what happens when your web site goes from thousands of hits per day to millions and millions per day? Then all that extra RAM you bought fills up and you are stuck with the server crashing again.

    I can think of quite a few bloated apps that you must have been in charge of.

    Naw, he's an M$ OS developer...

  • The Heretic (unregistered) in reply to Anonymous
    Anonymous:
    The computer had 512MB of ram. What production computer has 512MB?
    Maybe the story was from a few years ago.
    Wilhelm's solution was one of many. It was the correct way to do it from a University Computer Science department standpoint, but businesses don't operate that way. You don't sink 25 man months into problems with such an unbelievably small scope.
    We don't know what the scope really was or the importance of the application.

    Too many companies today take the quick and dirty way out for fixing problems that are core to their business. The fix maybe works for a short while but if they don't address the real problem then it usually comes back - often in a bigger way. Code cruft happens in the real world, but eventually it needs to be cleaned up. Badly architected code needs to be refactored. The sooner the better.

    Building more and more badly crafted code on top of previously crafted code is not the solution that will work in the long run, but it is how too many companies work nonetheless.

    The correct solution would have been to upgrade the production server while bug-fixing the memory leaks. It just got taken too far. No surprise from a company that would let this happen in the first place.

    Wilhelm was the team leader but still an unfortunate player in all of this. The responsibility for the failure wasn't on his shoulders but on the management of said company. This should have never happened.

    I agree that they should have upgraded the RAM as a temporary fix, then gone in and cleaned up the code as much as possible. The problem is that most companies would not have let the code cleanup proceed; they would have said "the extra memory fixed the problem, why should we invest in cleaning up the code?"

    The server would have had to be rebooted once a week instead of once a day and as they added functionality it would have just gotten worse and worse.

    So, it is a good thing that the code was cleaned up (not sure about the methodology, or the time spent, but it was better than just letting it get worse).

    There is a balance between doing what is necessary in the short term and what will keep you in business in the long term. Any company that doesn't address both short and long term issues will eventually go out of business.

  • K&T (unregistered) in reply to anon
    Economics: you fail it!

    Management spent 2 man-years worth of time that could have been used on other projects. Those man-years aren't free.

    Management skimped on those 2 years during the initial write with every intention of spending the time doing things correctly once the app was deployed. They made a conscious choice to sacrifice quality for speed to market, with the hope that the number of customers turned off by a shitty product wouldn't outweigh the number of customers who wanted to get their hands on the product.

  • k6 (unregistered) in reply to Jay

    This is going too far...

    I've only one bit of memory in my brain, you insensitive clod !

    1

    0

    1

    impressed ? eh !

  • (cs) in reply to Anonymously Yours
    Anonymously Yours:
    You can't build a castle in a swamp regardless of the time-to-market speed.

    Listen, lad. I built this kingdom up from nothing. When I started here, all there was was swamp. Other kings said I was daft to build a castle on a swamp, but I built it all the same, just to show 'em. It sank into the swamp. So, I built a second one. That sank into the swamp. So, I built a third one. That burned down, fell over, then sank into the swamp, but the fourth one... stayed up! And that's what you're gonna get, lad: the strongest castle in these islands!

    (But I don't want any of that...)

  • (cs) in reply to Bob Johnson
    Bob Johnson:
    ounos:
    TRWTF is that he thought "staticalizing instance methods" actually reduces memory footprint.
    Correct me if I'm wrong, but every instance method has an implicit parameter added (I'm assuming this is C# based on a quick Google search). Switching an instance method to a static one would remove the "this" parameter, and reduce the junk on the stack. It may be a small performance gain, but those bytes can add up (my work is designed for small devices, so I may be nitpicking here).
    They were getting OutOfMemoryErrors, not StackOverflowErrors.

    I find it particularly hard to believe that they did (new SomeClass()).method(foo, bar); which, while surely a possible WTF, can't cause out of memory errors either (and the gc overhead typically is really minimal for such cases).

    -edit- Dang. I just repeated this: http://thedailywtf.com/Comments/That-Wouldve-Been-an-Option-Too.aspx?pg=3#206640
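
    A minimal Java sketch (hypothetical class, not the article's code) of the point under debate: making a method static removes the implicit this reference passed on each call, which is a stack/call-overhead detail; it does nothing about objects retained on the heap, which is what OutOfMemoryError is about.

        public class Staticalized {
            private final int factor = 2;

            // Instance method: the caller implicitly passes a 'this' reference.
            int scaleInstance(int x) {
                return x * factor;
            }

            // "Staticalized" version: no 'this'; any needed state is passed explicitly.
            // Heap usage is unchanged either way.
            static int scaleStatic(int x, int factor) {
                return x * factor;
            }
        }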

  • lokey (unregistered) in reply to Lawrence
    Lawrence:
    Double bah! Real programmers started with a 1K ZX-81, and used machine code! <grin>

    Triple bah! Real programmers cut out or installed jumper wires to modify the logic of a "program"!

  • Ikke (unregistered) in reply to WizardStan
    WizardStan:
    A week, perhaps, if there's a lot of paperwork to fill out and sign. Even a month, if it's a particularly troublesome company, is still less. I'll agree, rewriting was the right solution from a developer standpoint, but there are times in real-world business where you have to step back and say it'll just cost too much to fix.
    You don't work for government or a multinational company do you?

    It is not uncommon to have requests take 5 months to come back signed and approved. That's if it's approved. Otherwise you have to repeat the request, and it may take a little longer the second time.

    Yes, I worked in a smaller company (50 employees), where I could just call the CEO (he had the money, but officially wasn't even my boss - I was an independent entity) to tell him of the purchase. Yes, that's one minute.

  • (cs) in reply to Fedaykin
    Fedaykin:
    Sorry, but the WTF would be just throwing more memory at the problem, because all that would do is make the time between OOM errors longer. It wouldn't solve the core problem of poor resource management in the system.

    Now, from TFA, I would guess there is a big chance that the effort was way overboard, but the core intent was correct: fix the source of the problem.

    Throwing more memory at the problem is only yet another duct tape and chicken wire solution.

    Yea, but if it works you just saved yourself tens of thousands of dollars.

    Sure, we'd all like to run nothing but perfect code, but in the real world that's not always an option. I dealt with a system once, real piece of...work...And the only real solution was to shoot it in the head, bury it in a shallow grave, and start from scratch.

    The company that was wedded to the unfortunate system was unwilling to do this, and everyone they called in to fix it told them the same thing. So, eventually they call me in, and I say, "Yea, it's a dog. You should toss it."

    They say, "Is there no other possible solution?" I say, "Well, have you tried throwing more hardware at it?" Blank looks all around

    So I went out and spent 10,000 dollars on a new huge kickass server, and the goddamn thing ran like a champ. Yea, it sucked up a ton of memory, yea it was an unmaintainable piece of crap. But I saved them a quarter million dollars, bought them five years time to shop a new solution, pocketed a fat fee, and left a happy customer behind.

    When the only answer you're willing to give to someone's question is, "Pay me to rewrite it from scratch," you're just trying to drum up business for yourself. There is always another way.

  • WizardStan (unregistered) in reply to Ikke
    Ikke:
    You don't work for government or a multinational company do you?

    It is not uncommon to have requests take 5 months to come back signed and approved. That's if it's approved. Otherwise you have to repeat the request, and it may take a little longer the second time.

    Yes, I worked in a smaller company (50 employees), where I could just call the CEO (he had the money, but officially wasn't even my boss - I was an independent entity) to tell him of the purchase. Yes, that's one minute.

    Actually I work for a multinational contracting firm doing work for a multinational bank. Even so, I've never seen requests take more than a few weeks to be accepted or approved. And it's basically the same whether it's a hardware or software change anyway. A project is a project, at least around here. Maybe it's different for you, but if it takes 5 months to get approval to add more ram, wouldn't it also take about 5 months to get approval to start a new software project? In that case, the argument is moot, since the approval overhead is the same either way and can be cancelled out.

  • Anonymous Coward (unregistered) in reply to anonymous

    Hah! Try 768 bytes (512 12-bit words), not of memory but of instruction space! That was just last year, too.

  • Ben (unregistered) in reply to tuna
    tuna:
    TRWTF is that an intern had the balls to actually say that in a room full of his superiors. Awesome.

    I wouldn't want to work anywhere where I couldn't. (And I haven't.)

  • noah (unregistered) in reply to anonymous

    3.5k!? That would've been heaven! I started with a whopping 1k on my ZX-81.

  • Joe (unregistered) in reply to Erlingur
    Erlingur:
    Write-only memory? I really, really hope you're kidding...

    Captcha: nobis (seems quite fitting)

    There was a post on Slashdot a couple of years ago specifically asking the community about open source solutions to write-only storage. For some federal reporting reasons (SOX?) it was required. Logs had to be kept that were write-only. And the system had to be certified, so no home brew garage built solutions. They wouldn't stand up in court.

    "I really, really hope you're kidding..."

    No, there's more to life than what we expect. We get surprised every day.

  • Fedaykin (unregistered) in reply to Satanicpuppy
    Satanicpuppy:
    Yea, but if it works you just saved yourself tens of thousands of dollars.

    Today. What about tomorrow? Now, if this is an application that doesn't matter and has a short life cycle, or is already beyond hope, then sure, apply some duct tape. However, if this is a mission-critical application with a long expected lifetime, then spending the money required to fix the core problem now will likely save you many times over in the long run. Of course, it all depends on the actual situation, but the story gives the impression that this was an early-life application that made up the core of their business and thus really should not have a fundamental problem fixed with duct tape.

    A stitch in time saves nine.

    Sure, we'd all like to run nothing but perfect code, but in the real world that's not always an option. I dealt with a system once, real piece of...work...And the only real solution was to shoot it in the head, bury it in a shallow grave, and start from scratch.

    It's not about wanting to run perfect code. It's about applying the right solution to the problem. Sometimes that solution is duct tape, sometimes it's not.

    I saved them a quarter million dollars, bought them five years time to shop a new solution

    This is exactly what is being suggested. Apply the duct tape AND start working on a real solution. TFA implies that only the application of duct tape was warranted. That's what is objectionable.

  • Joe (unregistered) in reply to CoderHero
    CoderHero:
    I think that nobody has bothered to realize that the WTF is that he spent 5 months with 5 people and only got a 50% reduction in memory use (the article doesn't really state that there was a leak)

    With 2 developer years of effort I would have hoped for something closer to an order of magnitude!

    ???

    Totally depends on: lines of code, complexity, experience of the developers.

    Care to share those details with us? Because I don't remember the article stating anything except that it was complex spaghetti code.

  • phil (unregistered) in reply to WizardStan
    WizardStan:
    SomeCoder:
    That's still $100 I'd rather spend on something else than being required to put more memory in my computer because some programmer thought his time was worth more than mine. Yes, that's a little selfish of me but think about it: do you value your time more than mine? I'm sure you do, just as I value my time AND money over some other programmer's that I don't even know :)
    That's irrelevant to the story and my own anecdote. In these cases, the funds for the RAM and the developers comes from the company. We're not talking about a boxed app which is going out to clients. These are apps developed in house with specific purposes in mind. One way or another, you're paying for the solution. You're the boss then. Which is more valuable: $60 and immediately fixed, or $100'000 and fixed in 5 months?

    It's still relevant if you think $60 is the total cost of simply throwing in the memory without fixing the underlying problem. It's in-house, so no, you won't have customers having their time/resources wasted when things screw up again (a possibility you should assume is quite likely if you never bother to fix the crappy code) -- you'll just be wasting your own company's time and resources when it screws up again. Why should those costs not also be considered when deciding not to try to fix the underlying issues? The cost of the RAM is $60. The trick is that the cost of not making any improvements to the software is not $0.

    All those costs may still end up being cheaper than a real fix. In most cases, they'd probably be cheaper than this particular redevelopment (5 guys, 5 months). Or, this company may have been large enough, and this app important enough to them, with a long enough expected lifespan, that the thorough overhaul was a good use of their resources (except then they still should've added the RAM to get it running better in the meantime right up front). But that's basically beside the point: there are choices between "do nothing to fix it" and "devote five people to five months of working on reducing memory usage full-time." But the risks of completely ignoring the underlying problem after applying the short-term fix are way out of proportion to the cost of, say, at least looking into what the apps' biggest problems are, and then having the information you need to be able to make a reasonably informed decision about whether or not it's worth fixing some of those problems. Otherwise you're just hoping that your program, which is already exhibiting unexpected behavior, will not exhibit any worse unexpected behavior if things change in the future. 90% of the time this may be the case, but think of it like insurance: spending more money than you "have to" in order to limit the chance of total disaster.

    Dealing with risk is important -- it's not something that a business can wisely ignore (well, at least if they're not in the sort of monopoly position where it doesn't matter how badly they screw themselves over; their revenue is pretty much guaranteed). Bad, unpredictable code is uncertainty. Uncertainty is risk. If you're just trying to buy yourself some time to replace it, at least do some actual legwork to verify that it should buy as much time as you expect, and give yourself plenty of margin for error.

  • jas88 (unregistered) in reply to Nerdguy
    Nerdguy:
    Funny story. Years ago I worked at a company that managed the website for a large publishing concern. Let's call that publishing concern "Petrel Publishing". They used an unwieldy CMS written in tcl/tk (you can guess which one), running on Solaris boxes. We had recently moved the site to a new set of servers and location, and things were going awfully. They'd crash outright every time the traffic went even slightly up. Hard deadlock. Doom. Even with 2GB per server, and quad UltraSparc CPUs. So we'd just log in and restart the server and blame the damned CMS, until eventually the bosses "Somewhere in EMEA" got tired of it and demanded we fix it post haste.

    So all the young unix nebbishes are sitting around trying to think of reasons that the software sucks, and whether we could get away with cron'ing something up, when our CIO, who'd started programming back when Data General still made hardware, comes in, listens to us for ten minutes, squints a bit and says "What about ulimit?".

    Now, we were all Linux admins. Sure, we'd used Solaris, and adminned boxes that had already been set up, so we were a bit unfamiliar with Solaris' ulimit settings.

    By default, the rlim_fd_max on Solaris is 64. So any process can only open 64 concurrent files. Now, the CMS needed to generate and serve up thousands upon thousands of files, and would just choke when asked to do so.

    Needless to say, we upped it to 8k, and the site never crashed again.

    So here's to you, old-timers.

    On a similar note, a few years back a friend had written a proxy server for rendering foreign character sets (Chinese/Japanese IIRC) using images (back when Unicode support was very limited, if even that). Being an experienced programmer who knew about high-performance HTTP daemons (having just completed the same course as the guys who started Zeus), he'd written his server using select() and non-blocking I/O, and everything was fine on all the servers across the globe his server ran on - except one Solaris box in Australia, which kept failing under load. Yep, resource limit on file descriptors... He was very happy to learn of the ulimit command! (A sketch of what hitting that limit looks like from the application side follows this comment.)

    (Captcha: sino. Since this post's about a Chinese application, it seems somehow appropriate!)
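
    For anyone who wants to see what hitting a per-process descriptor cap like the one described above looks like from the application side, here is a minimal Java sketch (hypothetical; it assumes a Unix-like system with /etc/hosts present). It opens streams without closing them until the OS refuses, which surfaces as an IOException rather than anything memory-related:

        import java.io.FileInputStream;
        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;

        public class DescriptorExhaustion {
            public static void main(String[] args) {
                List<FileInputStream> held = new ArrayList<>();
                try {
                    while (true) {
                        // Never closed, so each iteration consumes one file descriptor.
                        held.add(new FileInputStream("/etc/hosts"));
                    }
                } catch (IOException e) {
                    System.out.println("Opened " + held.size() + " streams before failing: " + e);
                }
            }
        }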

  • (cs) in reply to Fedaykin
    Fedaykin:
    Sorry, but the WTF would be just throwing more memory at the problem,...

    I think it was Stalin who said: Quantity has a quality of its own.

    Solving the problem is the first priority: the site needs to be up. If adding $60 in RAM would make it stay up, then do that.

    Then look at actual memory usage and refactoring and such. If the increase in RAM gives you breathing room, maybe a re-design or purchasing another package is better for the company.

    First, figure out what the goal is. Next, pick the strategy. Duct tape and chicken wire was the appropriate material for Operation Quicksilver.

  • (cs)

    I don't usually complain about the stories that go up on here, and I don't even really complain about Manslavery Feck Day (though they are starting to get better now)... but the fact is, if you are using over double the amount of RAM you need because of poor application management... you need to have an overhaul of how memory is managed.

    In fact, if the server only had 1GB, doubling it to 2GB would probably have only reduced the occurrence of the problem, as there were likely unknown memory leaks within the system that would have just taken the application up to using 2GB of RAM anyway.

    Wilhelm did the right thing; the system needed an overhaul. However, the system could also have been upgraded to 2GB at the time to improve customer service in the meantime until the problems were actually resolved.

    I don't see any WTF in what Wilhelm did; with more and more quick fixes the system would likely have just gotten worse and more unmanageable as time proceeded.

  • Bob (unregistered)

    Yeah, but if the original problems were caused by a memory leak then increasing memory would have only delayed the problem; it would still eventually exceed the 2GB available.

  • Too much TV (unregistered) in reply to webhamster
    webhamster:
    Hypothetical:
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.
    You have only 1 KB of memory. Now what?

    I build an Apollo Command Module and a Lunar Module and fly to the moon. That's a whole lot more memory than those systems had...

    /kids today

    Nope. Just watched the Moon landing stuff on Discovery Science last week; the main guidance computer used about 72k of memory. Of course, that was for Data, Programs AND the OS, which they of course had to write themselves. The program was "written" by knitting copper wires through and around memory core rings. If the copper went through the ring, it was a 1; if it passed outside the ring, it was a 0. The resulting knitted cable was then snaked around and attached to boards that were installed into the spacecraft.

    The programmers weren't even the engineers that designed the program; they contracted women to do the knitting because their hands were smaller, more dexterous, and used to doing that kind of work in the 1960s.

    Fascinating bit of history.

    Cheers!

  • Jeff Rife (unregistered) in reply to Kazan
    Kazan:
    SomeCoder:
    I don't think 2 GB of RAM is going to come down in price THAT much in 7 years. Currently, the price of 2 GB of RAM - at the cheapest place I know of - is about $120 which is a lot more than $20.
    I just bought 2GB of DDR2-800 for $50
    And, you can get 4GB (a pair of 2GB DDR2-667) of ECC server RAM for $100 shipped.
  • Christian (unregistered)

    While that was probably an embarrassing moment, I wouldn't consider it a wasted investment. Bob made a point, but that's not the right attitude to have. Leaks are leaks, and adding more RAM is just a band-aid. Just because hardware is cheap and powerful, doesn't mean developers can afford to get lazy and write sloppy code.

    That's like saying Vista's problems will disappear if you throw enough RAM at it.

  • RGB - my real initials (unregistered) in reply to anonymous

    4k of 16 bit magnetic core memory on a Varian 620-L.

    Bitswitches on the front, to load the bootstrap loader.

    ASR-33 with a tape reader and punch. 110 baud.

    I had to disassemble and rewrite the assembler just to get the editor AND assembler into memory at the same time. I had to create new assembly language mnemonics to implement commands the designers never thought of. Transfer (no source) incremented to B. Saves an entire 16-bit word over Clear B, Increment B.

    Real Programmers Use Bitswitches.
