• (cs) in reply to Wilhelm

    Write-only memory? Well, that must be useful.

  • Lawrence (unregistered) in reply to FredSaw

    Double bah! Real programmers started with a 1K ZX-81, and used machine code! <grin>

  • (cs) in reply to Erlingur
    Erlingur:
    Write-only memory? I really, really hope you're kidding...

    Captcha: nobis (seems quite fitting)

    There seems to be a positive correlation between captcha-posting and abject cluelessness.

    On old 8-bit computers, with their tiny 64K address space, it was not at all unusual for some memory locations to be, in a sense, overloaded. You'd write to a location to set some hardware state, and read from it to retrieve some completely unrelated value.

  • Experience Talking (unregistered) in reply to DJ

    "Most of them are not that well versed with what is going on besides what they actually know." OK, as opposed to the other kind that is well versed with what they don't actually know?

    Sorry, after 20 years of working with all experience ranges, I have seen at most a slight correlation between oldness of school and willingness to consider the solutions of others. Everyone's opinionated to some degree.

    I'll agree that everyone needs to work at keeping up with what's going on. C# and Java programming (my current work) are quite different from programming 6502 machine language in the 1K provided on a KIM-1 (my first machine).

  • Wilhelm (unregistered) in reply to Zylon
    Zylon:
    Erlingur:
    Write-only memory? I really, really hope you're kidding...

    On old 8-bit computers, with their tiny 64K address space, it was not at all unusual for some memory locations to be, in a sense, overloaded. You'd write to a location to set some hardware state, and read from it to retrieve some completely unrelated value.

    This is what I was going to say.

  • Wilhelm (unregistered) in reply to Matthew
    Matthew:
    Can't remember.

    Right here. The winner.

  • bramster (unregistered) in reply to Hypothetical
    Hypothetical:
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.
    You have only 1 KB of memory. Now what?

    Two 7-track tape drives...

  • frustrati (unregistered) in reply to ClaudeSuck.de
    ClaudeSuck.de:
    Write-only memory? Well, that must be useful.
    Hey, most of my brain's memory is write-only. Either that, or it is some sort of circular buffer.

    AFAICR, the C64 had a number of I/O-mapped memory locations that could only be POKEd*, not PEEKed*. Most of the SID* was like that. The VIC* had locations that set the requested behaviour when written but reported status when read, e.g. the current raster line.

    *POKE: The BASIC instruction for storing an 8-bit value in a specific memory address.
    *PEEK: The BASIC instruction for reading an 8-bit memory location.
    *SID: The sound controller chip.
    *VIC: The video controller chip.

    Wow! A blast from the past!

  • AndyC (unregistered) in reply to Scurvy
    Scurvy:
    I'm sorry, did you say write-only memory? Whereabouts can you buy that these days?

    /dev/null

    There is tons of it there.

  • bramster (unregistered) in reply to TGV
    TGV:
    FredSaw:
    Bah! Real programmers started with 16K.
    No, real programmers started with 4K words, but that's another story.

    Actually, we started with 4-bit words

  • DL (unregistered) in reply to Scurvy
    Scurvy:
    Wilhelm:
    Commodore 64s only had 38k of RAM. The rest was write- or read-only memory

    I'm sorry, did you say write-only memory? Whereabouts can you buy that these days?

    At Signetics, apparently.

    http://en.wikipedia.org/wiki/Write_only_memory

    I also seem to recall W.O.M. being a plot point in a David Brin novel.

  • Mike (unregistered) in reply to Hypothetical

    Hunt the Wumpus!!

  • David Leppik (unregistered) in reply to frustrati
    frustrati:
    Bob Johnson:
    ounos:
    TRWTF is that he thought "staticalizing instance methods" actually reduces memory footprint.
    Correct me if I'm wrong, but every instance method has an implicit parameter added (I'm assuming this is C# based on a quick google search). Switching an instance method to a static would remove the "this" parameter, and reduce the junk on the stack. It may be a small performance gain, but those bytes can add up (my work is designed for small devices, so I may be nitpicking here).
    It could also be that code like this:
    (new SomeClass()).instanceMethodThatShouldBeStatic();

    Was reduced to:

    SomeClass.instanceMethodThatShouldBeStatic();

    Which should drastically reduce the GC's workload.

    I agree with the countless people who state that Wilhelm should focus on fixing memory leaks (and tons of temporary objects), but not focus too much on getting the app below 512MB. You can probably fit a webapp in less than 2KB, but why bother? As with so many things, it is about finding the balance.

    If the stack is an issue, a StackOverflowException is likely to be the problem rather than an OutOfMemoryException. The stack is usually tiny.

    Creating and destroying temporary objects is something Java does very efficiently, using a thread-local "eden" which is easy to wipe. Different Java versions vary, and it could hypothetically be the issue. However, since creating lots of temporary objects has always been the recommended programming style, it's something Sun has always been careful about. And again, it's unlikely to cause an OutOfMemoryException, since the garbage collector attempts to run before that exception is thrown.

    (On the other hand, if huge temporary objects are constructed recursively, that could cause all sorts of OutOfMemoryExceptions.)

    The most likely issue is with cached data being held too long. And in that case, static methods could be a problem rather than the solution, if it means static references to data that used to be garbage collected.

    I've had similar issues, since I work on a memory-hungry web application. The difference, though, is that the memory usage is intentional, as we store huge sections of a database in memory for on-the-fly statistical analysis and other SQL-hostile queries.
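
    To make that last point concrete, here's a minimal Java sketch (the class and field names are hypothetical, not from the article's code): anything reachable from a static field lives as long as the class does, so a cache that used to die with each request quietly becomes permanent once it is made static.

        import java.util.HashMap;
        import java.util.Map;

        public class ReportCache {

            // Reachable from the class itself: entries stay alive for the life
            // of the JVM unless something explicitly removes them -- a slow,
            // hard-to-spot "leak" even though nothing is technically lost.
            private static final Map<String, byte[]> GLOBAL_CACHE =
                    new HashMap<String, byte[]>();

            // Reachable only through an instance (shown for contrast, unused here):
            // once the instance is dropped, say at the end of a request,
            // everything it cached becomes garbage too.
            private final Map<String, byte[]> perRequestCache =
                    new HashMap<String, byte[]>();

            public byte[] load(String key) {
                byte[] value = GLOBAL_CACHE.get(key);
                if (value == null) {
                    value = expensiveLookup(key);
                    GLOBAL_CACHE.put(key, value); // never evicted
                }
                return value;
            }

            private byte[] expensiveLookup(String key) {
                return new byte[50 * 1024]; // stand-in for real work
            }
        }

    If that static map is never bounded or cleared, the heap grows with every distinct key, and no amount of extra RAM fixes it; an eviction policy (or instance scoping) does.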

  • Tom (unregistered) in reply to tuna

    As far as I can tell, the suggestion is that it wasn't leaking memory, it just needed more than 512 for what it was doing. It's like spending hours squeezing every article of clothing into a suitcase, using complicated folding techniques, when you could just get a larger suitcase for ten bucks. If it was leaking, then fixing the leaks is surely more important than getting the normal running memory usage down, especially as that's likely to make your code unmaintainable.

  • Plexyglazz (unregistered)

    Wow, he actually managed to get that through management AND produce fixed code? Wilhelm should get a medal and a carte blanche to walk into any software company to start kicking their rear-ends! (Yes, I mean you Adobe!)

  • Wilhelm (unregistered) in reply to Gort
    Gort:
    Bloody n00b BASIC programmers.

    Hey watch it. I wrote a bitchin side-scroller for my C64 in machine code.

    Took me ages to copy out all those DATA lines from Zzap64 magazine...

  • (cs) in reply to bramster
    bramster:
    TGV:
    FredSaw:
    Bah! Real programmers started with 16K.
    No, real programmers started with 4K words, but that's another story.

    Actually, we started with 4-bit words

    Actually, we started with ferric rings.

  • WizardStan (unregistered) in reply to ClaudeSuck.de

    My recent application takes 500 megs to start. Why? No idea. It caches data records, but with 0 records, it uses up 500 megs of RAM. Could I improve it? Sure, but how long will that take? 5 months with a team of 5 perhaps? Or should I just plunk in a $60 memory chip and leave it be? As said, it caches data records. Each record is about 50k in size. I could probably improve that as well, but should I? It currently handles 5'000 records. 750 megs of RAM. We're expecting that to grow to 30'000 by 2010. 2 gigs of RAM! And I'm out again. Guess I should have improved it when I had the chance. Or, because it's 2010, another 2 gigs of RAM will only cost me $20 (or less, who knows?). DONE! More memory. Expectations have us hitting 100'000 records by 2015. Uh oh: 5.5 gigs. No good now, hard limit is 4 gigs. Solution: Buy a new box. I'd bet dollars to doughnuts that 8 or 16 gigs of RAM going into a box would be no problem by 2012 or so, at a cost of maybe a few thousand. Does any of that solve the underlying problem? No, memory requirement is still far higher than it probably could be, but a cost of a few (even ten or twenty) thousand for a new server is far cheaper than 2 man-years of recoding effort.
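
    Spelled out, those numbers are just the stated ~500 MB baseline plus about 50 KB per cached record:

        500 MB +   5'000 × 50 KB ≈ 500 MB + 0.25 GB ≈ 0.75 GB
        500 MB +  30'000 × 50 KB ≈ 500 MB + 1.5 GB  ≈ 2 GB
        500 MB + 100'000 × 50 KB ≈ 500 MB + 5 GB    ≈ 5.5 GB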

    ClaudeSuck.de:
    Write-only memory? Well, that must be useful.
    It is, actually. Microcontrollers often use write-only memory for control flags and output pins. There are a lot of uses under the proper definition as well.
  • (cs) in reply to Numeromancer
    Numeromancer:
    bramster:
    TGV:
    FredSaw:
    Bah! Real programmers started with 16K.
    No, real programmers started with 4K words, but that's another story.

    Actually, we started with 4-bit words

    Actually, we started with ferric rings.

    Wait, no, we started with null and void, and have come full circle!

  • Anonymous Cow-Herd (unregistered) in reply to Leon
    Leon:
    Your interviewer is an undead zombie and cannot be killed.

    You'll just have to wait until the process dies of its own accord or reboot the machine.

  • SomeCoder (unregistered) in reply to Tom
    Tom:
    As far as I can tell, the suggestion is that it wasn't leaking memory, it just needed more than 512 for what it was doing. It's like spending hours squeezing every article of clothing into a suitcase, using complicated folding techniques, when you could just get a larger suitcase for ten bucks. If it *was* leaking, then fixing the leaks is surely more important than getting the normal running memory usage down, especially as that's likely to make your code unmaintainable.

    "Running profilers and stress testing tools they were able to identify trouble modules, unreleased resources, and unclosed connections."

    Unreleased resources and unclosed connections sound like leaking memory to me. I could be wrong, but that's what I got out of it.

  • george (unregistered)

    This WTF reminds me of the Pizza Hut Pizza Mia commercial, in which this guy (Wilhelm) works very hard producing a "discount pizza availability chart" in order to try and save his friends some money, but this pompous douchebag (Bob) tears it down because he wants to be a tool and pay $5 for Pizza Hut's Pizza Mia deal.

  • Danodemano (unregistered) in reply to AndyC

    LMAO, classic!

  • phil (unregistered) in reply to jtl
    jtl:
    While optimizing code is all well and good, management cares about the bottom line. If the memory usage slowdown ends up being negligible after a cheap RAM upgrade, you look pretty stupid for wasting time and money when a quick fix would have kept the trains on time. You say eventually that wouldn't be enough, but then maybe they buy a new server with twice that much RAM for 100 bucks. You don't know for sure, and worst case they end up optimizing later, but maybe they can get away with it. If the hardware is cheap, then don't blow money optimizing the software first.

    I know you all live in Utopias though where it's okay to waste money on non-noticeable improvements, but the intern just pointed out that Wilhelm possibly wasted a lot of money.

    Results are all that matters in business.

    But it's not good decision making to just compare the cost of improving the system itself against the immediate cost of the quick and dirty fix. Maybe that will be the only cost, but that "maybe" is the giveaway that you need to take uncertainty into account when choosing the solution. The bottom line is what really matters, but if the expected cost of the quick and dirty fix is higher than the expected cost of the system overhaul (i.e., it greatly increases the chance of having to do a significantly more expensive overhaul down the line), then the better choice is to spend the money on an overhaul now to maximize your long-term profitability. But too many managers don't seem to know about that sort of analysis (or just don't want to bother with the work required, which is a lot harder than just saying "RAM costs less than five months of work").

  • Alex (unregistered)

    TRWTF is that other solutions were not considered :P Sure a RAM increase may have been only a temporary solution, but it wasn't even considered. The ONLY solution considered was an entire rewrite of code.

    If I was in the same situation, and I knew that the rewrite of code was the correct decision, my face would still be red since I didn't research any other options.

  • Wilhelm (unregistered) in reply to george
    george:
    This WTF reminds me of the Pizza Hut Pizza Mia commercial, in which this guy (Wilhelm) works very hard producing a "discount pizza availability chart" in order to try and save his friends some money, but this pompous douchebag (Bob) tears it down because he wants to be a tool and pay $5 for Pizza Hut's Pizza Mia deal.

    ..and subsequently all the dudes die years later of heart disease (Memory Leak, Postponed).

  • SomeCoder (unregistered) in reply to WizardStan
    WizardStan:
    My recent application takes 500 megs to start. Why? No idea. It caches data records, but with 0 records, it uses up 500 megs of RAM. Could I improve it? Sure, but how long will that take? 5 months with a team of 5 perhaps? Or should I just plunk in a $60 memory chip and leave it be? As said, it caches data records. Each record is about 50k in size. I could probably improve that as well, but should I? It currently handles 5'000 records. 750 megs of RAM. We're expecting that to grow to 30'000 by 2010. 2 gigs of RAM! And I'm out again. Guess I should have improved it when I had the chance. Or, because it's 2010, another 2 gigs of RAM will only cost me $20 (or less, who knows?). DONE! More memory. Expectations have us hitting 100'000 records by 2015. Uh oh: 5.5 gigs. No good now, hard limit is 4 gigs. Solution: Buy a new box. I'd bet dollars to doughnuts that 8 or 16 gigs of RAM going into a box would be no problem by 2012 or so, at a cost of maybe a few thousand. Does any of that solve the underlying problem? No, memory requirement is still far higher than it probably could be, but a cost of a few (even ten or twenty) thousand for a new server is far cheaper than 2 man-years of recoding effort.

    I wouldn't say that's a good excuse for poor code. I'd also say that your dollar estimates, even for 2015, are way too low. I don't think 2 GB of RAM is going to come down in price THAT much in 7 years. Currently, the price of 2 GB of RAM - at the cheapest place I know of - is about $120 which is a lot more than $20.

    Also, in the meantime, your users suffer because the server crashes a lot and when it doesn't crash, it takes forever to load pages. That's a large amount of lost revenue that you could be gathering but aren't because users can't see/click on ads and/or go to competing products.

  • CoderHero (unregistered) in reply to SomeCoder
    SomeCoder:
    CoderHero:
    SomeCoder:
    CoderHero:
    SomeCoder:
    Yes, I completely agree with what has been said here. Wilhelm was completely correct in doing what he did. Yes, he was maybe a bit hard nosed about things but I tend to agree with him on one point: programmers are lazy asses these days. ...

    It's not that we're lazy these days. As a developer who works for a living, suppose I have a choice when I implement something:
    a) An easy, reasonably fast solution that I know will work but will require an extra 2 MB of RAM (maybe temporarily, maybe not).
    b) A much more complicated algorithm of similar speed (possibly slower) that needs a negligible amount of RAM.
    c) A very complicated algorithm that is just as fast, maybe faster, and has no significant RAM requirement.

    Given that today computing power and RAM are both VERY cheap I'll choose the solution that's the easiest. My time is expensive compared to computing costs. A more complex solution will likely have more bugs in it as well.

    The thing to compare is that 20 or 30 years ago, cheap RAM and cheap computing were not available! A different set of criteria was placed on developers.

    One last point is that whatever you think your product has to scale to may never actually materialize. Just because there's the potential for 50 million simultaneous users doesn't mean it's going to happen.

    Ok, so let me pose a different question to you: What is the end user's time worth to you?

    My time is also expensive. If I have to sit and wait for an app because it requires 4 GB of RAM and I have 1 GB, that's wasting MY time. Sure, your time was saved but which time is more valuable? In this case, I'd say it was the end user's time because if your app thrashes and requires a lot of RAM, he's going to want to use the competing product that doesn't. Or he'll go to a competing website.

    My computer is also expensive. Hardware is relatively cheap but it's not THAT cheap for the average user. You think the soccer mom who uses your app likes to go out and buy sticks of RAM just because some app is bloated? You think she likes to wait while your server crashes because the app leaks memory?

    I don't mean this post as a flame at all, but I think that if more programmers would start caring about the end user's time more than their own time, we'd have a lot more quality apps out there.

    The problem there is that it's almost impossible to measure. I know how to measure my time and what computing requirements cost.

    For Joe Sixpack who's unemployed, I can say his time is worth almost $0. For Bill HighflyingCEO, his time might be worth $100/hour.

    I don't know about you, but I prefer to take measurements of things I can measure, or reasonably estimate, rather than vague and fuzzy numbers.

    It's pretty easy to measure. There are two ways:

    1. Is your product getting bashed in reviews? Do you know of one person on the planet who actually likes it? Ok, you failed. I'm looking at Vista for the specific example here. The only person I've ever heard of that likes it is Microsoft Fanboy Jeff Atwood. Every other review of it is that it's a bloated piece of crap.

    2. Try using your product on a low end machine. And by low end, I mean something that you might not be able to purchase anymore. Is it incredibly painful? You just measured the cost of your time as it relates to using the app. Not everyone has dual core 2.4 GHz processors and 4 GB of RAM. My machine at home (which I am replacing soon because it's painful) only has 512 MB of RAM.

    Another way is to gauge how well your product is selling. MySpace does incredibly well (somehow) despite the fact that it's the most poorly designed web app ever.

    There are exceptions but I for one, believe that we need to get back to making applications that are of high quality as opposed to just saying "Eh, in the future 32 TB of RAM will be the norm so who cares!"

    Keep in mind that, in general, when you start a project you can choose one of these three as a goal, and you'll usually get it.

    1. Cheap (low production cost)
    2. Fast (low resource needs, i.e. memory)
    3. Good (low maintenance/very extensible software)

    In some rare instances you might get two of the three, but it comes at a severe expense of the one you chose not to take. If you try for too much you get a crappy product all around. Most development shops aim for 1 or 3 because resources are cheap!

    "Hey we've got this great new product. It doesn't have the features of the other guy, but we only need 1/10 of the RAM of the other guy."

  • Wilhelm (unregistered) in reply to Alex

    TRWTF is why they didn't just:
    1. Stave off problems with more memory (IF indeed this would have worked)
    2. Fix the memory leaks without splurging 5 months/a large team on it

    Pragmatism allied with... the other thing I don't know the term for.

  • Anonymous (unregistered)

    The computer had 512MB of ram. What production computer has 512MB?

    Wilhelm's solution was one of many. It was the correct way to do it from a University Computer Science department standpoint, but businesses don't operate that way. You don't sink 25 man-months into problems with such an unbelievably small scope.

    The correct solution would have been to upgrade the production server while bug-fixing the memory leaks. It just got taken too far. No surprise from a company that would let this happen in the first place.

    Wilhelm was the team leader but still an unfortunate player in all of this. The responsibility for the failure wasn't on his shoulders but on the management of said company. This should never have happened.

  • k (unregistered) in reply to frustrati

    I recall PEEK-ing at memory locations that didn't exist on the ZX81 - not all of them returned zero...

    The truth is out there....

    Those early ZX80/ZX81/Spectrum manuals (the big chunky spiral-bound ones) were some of the best programming books I ever recall reading. But maybe 9-year-olds are easy to impress :-) (yeah, I am old)

  • morry (unregistered)

    I learned programming on a 128KB Apple IIe. And you had to jump through HUGE hoops to access that second 64K.

    No point to this just wanted to brag.

  • (cs) in reply to ClaudeSuck.de
    ClaudeSuck.de:
    Real programmers free memory with a round-kick over the motherboard.
    That would leave a footprint, all right.
  • SomeCoder (unregistered) in reply to CoderHero
    CoderHero:
    Keep in mind that, in general, when you start a project you can choose one of these three as a goal, and you'll usually get it. 1. Cheap (low production cost) 2. Fast (low resource needs, i.e. memory) 3. Good (low maintenance/very extensible software)

    In some rare instances you might get two of the three, but it comes at a severe expense of the one you chose not to take. If you try for too much you get a crappy product all around. Most development shops aim for 1 or 3 because resources are cheap!

    "Hey we've got this great new product. It doesn't have the features of the other guy, but we only need 1/10 of the RAM of the other guy."

    I would posit that it doesn't have to be that way. A lot of developers were taught on languages that are too forgiving and so when they build systems, they are lazy and careless. Management who doesn't understand anything about computers at all is also to blame.

    Yes, in the real world you are told to do it fast and who cares about the quality. I work in the real world. I understand it. That doesn't mean we should accept it and act like that's just how it is.

    As for the story, the way I read it was that it had real problems and was leaking memory. Throwing more RAM at it would have just delayed the time before it crashed but it still would have been crashing. In this case, rewriting it to be more efficient is the correct choice.

  • (cs) in reply to ounos
    ounos:
    TRWTF is that he thought "staticalizing instance methods" actually reduces memory footprint.

    One less function pointer per instance, per function moved from instance to static, in Java.

    (IIRC all non-static methods are virtual in Java and therefore in the vtable.)
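
    For what it's worth, the part that is easy to verify is the call site: an instance method is invoked with a hidden this reference and needs an object to be called on, while a static method needs neither. A minimal sketch (hypothetical class, not from the article):

        public class Formatter {

            private static final String PREFIX = "> ";

            // Instance method: every call passes a hidden 'this' reference,
            // even though the body never touches instance state.
            public String quoteInstance(String line) {
                return PREFIX + line;
            }

            // Static method: no receiver is passed, and callers need no object,
            // so no throwaway instances get allocated just to make the call.
            public static String quoteStatic(String line) {
                return PREFIX + line;
            }
        }

        // Usage:
        //   String a = new Formatter().quoteInstance("hi"); // allocates a Formatter
        //   String b = Formatter.quoteStatic("hi");         // allocates nothing extra

    The method table itself lives with the class rather than with each instance, though, so whether "staticalizing" ever shows up as a measurable footprint difference is debatable; the concrete savings are the hidden argument per call and the throwaway objects that callers no longer create.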

  • Wilhelm (unregistered)

    Presumably the intern's bright idea never got tested, since they'd already fixed the memory problem.

    Hmmmm...

  • (cs) in reply to Erlingur
    Erlingur:
    Write-only memory? I really, really hope you're kidding...

    Are you knocking the Signetics 25120:

    http://www.national.com/rap/files/datasheet.pdf

    Yes, this was actually published in their databook. I love that it needs 6.3VAC....for the heaters :)

  • WilhelmTwo (unregistered) in reply to Wilhelm

    I'd imagine this is why the intern caught them with their pants down in the meeting.

    They should have been able to easily refute the intern's idea by explaining why it was a horrible idea to postpone a real solution :P Should have crushed that poor intern's smart ass right there...

  • (cs) in reply to Leon
    Leon:
    Wilhelm:
    krupa:
    Wilhelm:
    Hypothetical:
    You have only 1 KB of memory. Now what?

    Buy some more memory.

    Did you not read the article?

    Someone stole your wallet. Now what do you do?

    Kill the interviewer and steal his wallet.

    Your interviewer is an undead zombie and cannot be killed. Also he has no wallet. Now what do you do?

    Oh great, Bob's my interviewer now? I guess I'd lock myself in a mall. I know, some day I'll be out of memory and money, and I'll have to comment out the call, but I never had the head for all that bigger picture stuff.

  • (cs) in reply to mauhiz
    mauhiz:
    From what I read in this article there were no memory leaks, just excessive memory usage. In this case adding memory is by far the best solution - even if the coding standards will have to wait. 512 megs is very little for a J2EE app.

    That may depend on the project life expectancy, but I think the WTF is what the writer of the article meant it to be.

    512MB to run a webpage is SMALL for J2EE?

    Now I know why I write PHP... I rarely have to up the memory limit over 8 MB, and when I do it's because I'm processing a massive amount of data and I want to have it all loaded simultaneously for fast access.

  • Andrew (unregistered) in reply to mrprogguy
    mrprogguy:
    Salami:
    5 months' work for 5 guys vs. $60. You guys that agree with Wilhelm must all be on the receiving end of paychecks and not the ones writing them.

    Yes. Management will always do the wrong thing, given the opportunity. Wilhelm was absolutely correct. Bandages don't heal wounds, only cover them. When the application is out of control, it needs to be re-engineered into something that can be scaled and maintained going forward.

    If these 5 guys are salaried employees, then management just saved $60.

  • Fedaykin (unregistered)

    Sorry, but the WTF would be just throwing more memory at the problem, because all that would do is make the time between OOM errors longer. It wouldn't solve the core problem of poor resource management in the system.

    Now, from TFA, I would guess there is a big chance that the effort was way overboard, but the core intent was correct: fix the source of the problem.

    Throwing more memory at the problem is only yet another duct tape and chicken wire solution.

  • (cs) in reply to poke X, peek (X)
    poke X:
    ...you could copy the BASIC ROM into the RAM memory underneath it, page it out, and have a RAM copy of BASIC that you could muck about with.
    Yeah, I used to do that on the Tandy CoCo. You couldn't use a debugger to single-step through ROM, but I learned an awful lot about how the OS worked by copying it into RAM and stepping through. I was teaching myself Assembly Language at the time.
  • phil (unregistered) in reply to CoderHero
    CoderHero:
    The problem there is that it's almost impossible to measure. I know how to measure my time and what computing requirements cost.

    For Joe Sixpack who's unemployed, I can say his time is worth almost $0. For Bill HighflyingCEO, his time might be worth $100/hour.

    I don't know about you, but I prefer to take measurements of things I can measure, or reasonably estimate, rather than vague and fuzzy numbers.

    But if you want to sell your software to users, can you really reasonably ignore those "vague and fuzzy numbers"? And it's not how much the user's time is worth to them so much as how much the user's time is worth to you -- in other words, if they think your app is slow and bloated and see a better alternative, how much money are you losing compared to if you had built a better program and they were still doing business with you?

    Is all this kind of stuff uncertain, and impossible to precisely quantify? Certainly. But without taking uncertain things into account, your business is going to run into trouble the first time it's unlucky enough for something to go wrong (for instance, if your biggest client doesn't upgrade their systems for a while and the app gets so slow on their systems that they decide to find something else). Some people/companies get away with bad design for a very long time. That doesn't make it a sound business practice.

    (But, yes, definitely stick the RAM in the box at the start to get it working better while you do the bigger fix. A bandaid is better than a continual bleed! And a fix as drastic as Wilhelm's may have indeed been overkill. But it's usually not wise to just do the minimum fix necessary whenever a problem comes up -- especially when you know there are fairly serious underlying problems.)

  • (cs) in reply to SomeCoder
    SomeCoder:
    WizardStan:
    My recent application takes 500 megs to start. Why? No idea. It caches data records, but with 0 records, it uses up 500 megs of RAM. Could I improve it? Sure, but how long will that take? 5 months with a team of 5 perhaps? Or should I just plunk in a $60 memory chip and leave it be? As said, it caches data records. Each record is about 50k in size. I could probably improve that as well, but should I? It currently handles 5'000 records. 750 megs of RAM. We're expecting that to grow to 30'000 by 2010. 2 gigs of RAM! And I'm out again. Guess I should have improved it when I had the chance. Or, because it's 2010, another 2 gigs of RAM will only cost me $20 (or less, who knows?). DONE! More memory. Expectations have us hitting 100'000 records by 2015. Uh oh: 5.5 gigs. No good now, hard limit is 4 gigs. Solution: Buy a new box. I'd bet dollars to doughnuts that 8 or 16 gigs of RAM going into a box would be no problem by 2012 or so, at a cost of maybe a few thousand. Does any of that solve the underlying problem? No, memory requirement is still far higher than it probably could be, but a cost of a few (even ten or twenty) thousand for a new server is far cheaper than 2 man-years of recoding effort.

    I wouldn't say that's a good excuse for poor code. I'd also say that your dollar estimates, even for 2015, are way too low. I don't think 2 GB of RAM is going to come down in price THAT much in 7 years. Currently, the price of 2 GB of RAM - at the cheapest place I know of - is about $120 which is a lot more than $20.

    Also, in the meantime, your users suffer because the server crashes a lot and when it doesn't crash, it takes forever to load pages. That's a large amount of lost revenue that you could be gathering but aren't because users can't see/click on ads and/or go to competing products.

    I just bought 2 GB of DDR2-800 for $50.

  • WizardStan (unregistered) in reply to SomeCoder
    SomeCoder:
    I wouldn't say that's a good excuse for poor code. I'd also say that your dollar estimates, even for 2015, are way too low. I don't think 2 GB of RAM is going to come down in price THAT much in 7 years. Currently, the price of 2 GB of RAM - at the cheapest place I know of - is about $120 which is a lot more than $20.

    Also, in the meantime, your users suffer because the server crashes a lot and when it doesn't crash, it takes forever to load pages. That's a large amount of lost revenue that you could be gathering but aren't because users can't see/click on ads and/or go to competing products.

    www.tigerdirect.ca/applications/SearchTools/item-details.asp?EdpNo=3404049&Sku=O261-8038 There, 4 gigs for $100. I just saved you $20 and doubled the amount you get. By Moore's law, which has held very steady and should continue to do so until about 2020 according to recent reports, the capacity of memory should double (and its cost halve) roughly every 18 months, so by 2010 that same 4 gigs should be $50. By mid-2011, it should be $25 for 4 gigs of RAM, assuming past trends continue more or less as they have. But that's all moot anyway, because $50 or $120, it's still far less than even one day of effort to find and fix any large memory problems. And you speak of downtime and lost revenue due to reboots? Wilhelm's (and by extension your) solution to rewrite the code comes in at 5 months of daily crashes and reboots. The intern's (and by extension my own) solution of more memory comes down to one (maybe two) days of crashes and reboots to replace the RAM. A week, perhaps, if there's a lot of paperwork to fill out and sign. Even a month, if it's a particularly troublesome company, is still less.
    I'll agree, rewriting was the right solution from a developer standpoint, but there are times in real-world business where you have to step back and say it'll just cost too much to fix. This, of course, is all under the assumption that memory leaks were, at most, a small part of the problem, if there were any at all. If the problem really was that it was leaking memory like a sieve, then the correct solution most definitely involves rewrites. But first, get some more RAM into that sucker so it doesn't need to be restarted so often.

  • You're Right Sir (unregistered) in reply to Salami
    Salami:
    I'd agree with you if the server had a normal amount of memory (which would be at least 2 GB). The first thing to do is to bump up the memory and then see what that gained you. You don't spend 2 man-years on a problem without exhausting all easier options first.

    Indeed, if you get into a car accident and are bleeding internally, they should obviously start with aromatherapy candles in the Emergency Room. That's an easier option, we can assess in a few hours what that gained us.

    Saves on surgeon salaries, and a favorite of insurance companies everywhere.

  • anon (unregistered)

    Three pages already and I'm still wondering why no one has mentioned anything about the expected lifespan of the app.

    Fixing the memory leaks is still the best solution. But after that, even if the app takes up 1GB of production server memory at peak usage, 2.5GB of ram should be enough to handle the changes in the data volume for its entire lifespan (assuming this is your average enterprise app). No need to go through low level tweaking reserved for embedded devices.

  • Steve (unregistered)

    I have to tend to agree with Wilhelm, given the facts available in the piece.

    If there are correctable inefficiencies in the system, then adding memory just temporarily masks the problem.

    Furthermore, I see nothing in the story which says that Wilhelm has anything other than something sorely lacking in many organizations -- experience. I mean, it wasn't like his group were being forced to write the applications on punch cards and submit them to the batch mainframe via the hot card reader.

    So what if he's not a "fun devil"? As per my last set of exchanges, it strikes me as if perhaps too much emphasis is placed on happy smiling faces. Perhaps his lack of emotionality is simply cultural and no one ever took the trouble to actually get to know him. Some people are simply quiet by nature, which can be misinterpreted as taciturnity.

    Not every problem can necessarily be solved by throwing more hardware at it, and even if it can be, it may be a situation where budgetary or other constraints don't allow such a solution (memory being sort of the low end of the spectrum). What if you've reached the limit of the hardware's upgradability, for instance, and replacing the system is not feasible?

    Admittedly, his rationalization, "...but no web application should ever require more than 500 megs", is a bit specious but his overall methodology seems sound.

    It sounds to me as if the programmers under his tutelage may have actually learned something valuable.

  • Stan (unregistered) in reply to FredSaw
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.

    Bah! to you too!

    My first personal computer had 3854 BYTES (yes, bytes) of RAM to program in. (Commodore VIC-20).
