• drone_6378 (unregistered) in reply to SomeCoder

    a 46% memory footprint reduction is not going to get you that far either. If you have to scale by orders of magnitude, your best bet is to have code that can be made parallel -- and add lots more machines. Yes, hardware can be an answer. If you style yourself an engineer, you use the best tool for the job. And that can be software, hardware, or both.

    Of course, I don't know the code. But if Wilhelm's crack team of developers only focused on footprint, chances are they didn't invest in highly modular code. You want to refactor modular code, not obscure hand-optimized, inlined code. Premature optimization is the root of all evil.

  • (cs) in reply to BentFranklin
    BentFranklin:
    snoofle:
    Regarding just getting it working the cheapest way possible...

    My brother is a CPA. He has a small office with ~10 employees. Each PC has its own licensed copy of the various accounting programs, but all are mounted from the network server.

    Everything was running win-98. Over the years, each annual upgrade of the accounting software would suck up more and more system resources until the only way to do two things was to exit the current program and start the other one, and then switch back when done.

    That has to be Peachtree by SAGE.

    (Hooray for learning how to quote.)

    Yay! Big step!

    Now work on learning how to quote properly. Trim some of the excess text you're quoting before posting. It cuts down on the amount of cruft you have to wade through when reading. :-)

  • (cs)

    Hi there, I'm writing through a wormhole from Optimum BBS back in 1988, dial in on 629 969 0009, because we've got a thread going about some idiot who's just spent 5 months recoding the main application where I work on the VAX because it really should fit everything into 512kB of memory, shouldn't it?

    He was unrolling loops, inlining functions, hand writing assembly code and everything, until this intern called up the Digital Equipment Corporation maintenance contractor and got him to upgrade the memory, and everything ran fine. But this bloke said it needed to be rewritten anyway, because any minicomputer application should fit in 512kB.

    Anyway, this bloke started working on a PDP-8 with 6kB, and says he could do anything with that so why should he need any fancy compiled code.

    To give him his due, he's got the memory use down lots, and it's all very nicely structured, but it has taken masses of time.

    Anyway, it's all "plus ça change, plus c'est la même chose", really. I don't think there's anything else I can add.

  • (cs) in reply to BadReferenceGuy
    BadReferenceGuy:
    Anonymously Yours:
    You can't build a castle in a swamp regardless of the time-to-market speed.
    Sure you can. And version 3.1 will be the most solid castle on the Internet (After versions 1 and 2 sink into the swamp, and version 3 burns down, falls over, and then sinks into the swamp.).

    Pontoons. All you need are pontoons.

  • (cs) in reply to Bappi
    Bappi:
    Fedaykin:
    Sorry, but the WTF would be just throwing more memory at the problem, because all that would do is make the time between OOM errors longer. It wouldn't solve the core problem of poor resource management in the system.

    Now, from TFA, I would guess there is a big chance that the effort was way overboard, but the core intent was correct: fix the source of the problem.

    Throwing more memory at the problem is only yet another duct tape and chicken wire solution.

    And what's the big problem with making the time between OOM errors longer? If it goes from once per day to once per week, on a production machine, that means fewer missed transactions, fewer lost sales, less time lost on reboots, and generally money saved or less income lost. It's a good idea to spend a trivial amount of money while you're working to resolve the root cause. Both Wilhelm and Bob have it wrong. You need to put in Bob's solution to stop the bleeding for five months, while at the same time reworking the code to stop memory leaks/reduce footprint/improve code.

    For crying out loud, you could be going bankrupt while you're optimizing your code.

    But this is the fundamental cause of enterprise software degrading as it ages: everyone knows the best solution is to apply a bandage now to stop the profuse bleeding, and then stitch up the wound so it actually heals properly. The problem is that "not bleeding" is a good-enough outcome that, unless proper documentation and planning are done, most companies forget about the wound. In the end, you get a big, ugly scar because you didn't bother to deal with the problem (a gaping wound), just the symptom (profuse bleeding). Sadly, long-term fixes get marginalized and abandoned way too often, simply because the band-aid is (falsely) considered "good enough."

  • troels (unregistered)
    Because of this approach the overall design of the system wasn't totally consistent — duplicate code, instance methods that should be static ...

    There is no such thing as instance methods that should be static.

  • anonymous (unregistered) in reply to anonymous

    Try 1K words of 12 bit magnetic core on a PDP-8e with an ASR-33 teletype with paper tape punch and reader.

    (Insert appropriate Monty Python reference here)

  • (cs) in reply to troels
    troels:
    Because of this approach the overall design of the system wasn't totally consistent — duplicate code, instance methods that should be static ...

    There is no such thing as instance methods that should be static.

    Ahh, but there are. In a number of poorly designed systems I've seen personally, there are methods within a class that are instance methods, although they perform no function that is specific to any instance. For instance (no pun intended), if you have a DateTime class that defines a "getCurrentDateTimeAsString()" method that retrieves the system's current date and time, you would be best off defining that as a static method, because the system's date and time should be the same across all instances and therefore shouldn't require instantiation to work properly. But I've seen numerous situations where, for some reason apparently known only to the developer, a method like that was defined as an instance method.
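
    A minimal sketch of the distinction (the class and method names are illustrative, not from any actual code base mentioned here):

        // The current date/time does not depend on any particular instance,
        // so a static method says so and needs no object at all.
        import java.text.SimpleDateFormat;
        import java.util.Date;

        public class DateTimeHelper {
            // Callers write DateTimeHelper.getCurrentDateTimeAsString() directly.
            public static String getCurrentDateTimeAsString() {
                return new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date());
            }
        }

        // The instance-method version forces a pointless
        // new DateTimeHelper().getCurrentDateTimeAsString() at every call site.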

  • Survey User 2338 (unregistered)

    When all you have is a hammer everything is a nail...

  • Harrow (unregistered) in reply to anonymous
    anonymous:
    Try 1K words of 12 bit magnetic core on a PDP-8e with an ASR-33 teletype with paper tape punch and reader.
    Overkill. If you have a paper tape punch and reader, you don't need the ASR-33. The less expensive KSR-33 will suffice.

    -Harrow.

  • Another Jack (unregistered) in reply to anonymous
    anonymous:
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.

    16k!?! I started with 3.5k on a VIC-20, thankyouverymuch.

    3.5k!?! We'd have crawled through hot coals to get 3.5k. We had 1k on our ZX81 and we were damn proud of it

  • Harrow (unregistered) in reply to Dave
    Dave:
    We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. -- Donald E. Knuth 1974

    We follow two rules in the matter of optimization: Rule 1. Don’t do it. Rule 2 (for experts only). Don’t do it yet – that is, not until you have a perfectly clear and unoptimized solution. -- M. A. Jackson 1975

    So, was Wilhelm guilty of "premature optimization" or was he first creating "a perfectly clear and unoptimized solution"? The article is not clear on this vital point, although it does lean in the direction of the correct solution (...analysis...prepared spreadsheets, charts, and graphs...identify trouble modules, unreleased resources, and unclosed connections...).

    Wilhelm's mistake was leading off his presentation with the RAM footprint reduction. As so many commenters here have pointed out, spending resources to make a running program smaller or faster is rarely justified today, unless you're Google.

    He should have set aside his personal bias, and begun with a discussion of how rushing the development had led to crappy architecture decisions, inefficient design, resource leaks, etc. Then, when he finally mentioned less RAM or fewer CPU cycles, any remarks like Bob's would seem like unprofessional carping.

    So TRWTF was not letting a geek make business decisions (he didn't, really) but allowing a zealot to structure a business case presentation.

    -Harrow.

  • (cs) in reply to k
    k:
    I recall PEEK-ing at memory locations that didn't exist on the ZX81 - not all of it returned zero...

    The truth is out there....

    Those early ZX80/ZX81/Spectrum manuals (the big chunky spiral bound ones) were some of the best programming books I ever recall reading. But maybe 9-year-olds are easy to impress :-) (yeah, I am old)

    You are not alone .... I remember how incredibly happy I was when I got my first 16 KB from MultiTech for my '81.

    As for my $0.02 about the OP:

    It sounds as if the web app in question was to remain in service for a long time to come - let's say 4+ years. In that case the expense of re-factoring the implementation with five guys for five months (full-time?) is justified, because over that long a period you get a lot of ancillary benefits: (hopefully) a streamlined implementation, more effective use of resources, enhanced developer productivity, customer confidence in the product ... etc.

    Having said that, had I been in Wilhelm's shoes I would in any case have tried to get the server's memory upgraded to the max (2GB or 4GB) as a stopgap measure.

  • (cs)

    I think there are two WTFs in this article.

    1. Lead programmer thinks an application should never use more than 512MB of RAM.
    2. An intern thinks optimization is dumb because it's easier in the short term to throw more hardware at it.

    In regards to #1, are they only running one server? With what kind of hardware? Are they insane?! My company has a cluster and there's a good reason to have one.

    As for #2, I suppose he's never worked on a monolithic application, being an intern and all, and has no real idea of how bloody difficult it is to maintain one.

  • Fedaykin (unregistered) in reply to Edss
    Edss:
    Fedaykin:
    Sorry, but the WTF would be just throwing more memory at the problem...

    Throwing more memory at the problem is only yet another duct tape and chicken wire solution.

    So why not put the memory in so it at least runs better for the 5 months that all the developers are working on the optimisation?

    I agree with this. As I've said to others, my beef is the article implies that only the memory upgrade was necessary and not fixing the memory leaks.

  • (cs)

    Why would anyone take anything an intern says seriously? They usually have no clue as to the underlying details of a given problem.

    Just thinking out loud...

  • LEGO (unregistered) in reply to Erlingur
    Erlingur:
    Write-only memory? I really, really hope you're kidding...

    Captcha: nobis (seems quite fitting)

    Ever heard of paper?

  • (cs) in reply to Fedaykin
    Fedaykin:
    Sorry, but the WTF would be just throwing more memory at the problem, because all that would do is make the time between OOM errors longer. It wouldn't solve the core problem of poor resource management in the system.

    You are right, but the WTF! is maybe that Wilhelm allowed the problem to persist for all the time he was spending on the optimizations, and totally ignored the fast solution of adding just a little extra memory.

    In the end, you are right. Throwing more and more memory at the problem is not the right solution if there are -memory leaks-.

    However when a server is equipped with as little as 512MB, just adding an extra GB or 2 really is the first thing you do. 2GB really is nothing these days. Heck, even 4GB is peanuts. This will buy you some time and costs you very little. At the very least, with more memory you can set a convenient memory threshold for an alarm to go off, after which you still have enough time so you can reboot your app outside peak business hours.
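
    A minimal sketch of such an alarm using the standard java.lang.management API (the 512MB figure and the listener body are just illustrative assumptions):

        import java.lang.management.ManagementFactory;
        import java.lang.management.MemoryNotificationInfo;
        import java.lang.management.MemoryPoolMXBean;
        import java.lang.management.MemoryType;
        import javax.management.Notification;
        import javax.management.NotificationEmitter;
        import javax.management.NotificationListener;

        public class HeapAlarm {
            public static void install() {
                final long threshold = 512L * 1024 * 1024; // alarm once a heap pool passes ~512MB

                // Ask the heap pools to flag usage above the threshold.
                for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                    if (pool.getType() == MemoryType.HEAP && pool.isUsageThresholdSupported()) {
                        long max = pool.getUsage().getMax(); // -1 if undefined
                        if (max < 0 || threshold <= max) {
                            pool.setUsageThreshold(threshold);
                        }
                    }
                }

                // The memory MXBean emits a notification when a threshold is exceeded.
                NotificationEmitter emitter =
                        (NotificationEmitter) ManagementFactory.getMemoryMXBean();
                emitter.addNotificationListener(new NotificationListener() {
                    public void handleNotification(Notification n, Object handback) {
                        if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED.equals(n.getType())) {
                            // Page the operators / schedule an off-peak restart here.
                            System.err.println("Heap usage passed the alarm threshold");
                        }
                    }
                }, null, null);
            }
        }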

    While that memory sits there you can analyze the cause of the problem. If there are leaks, fix them. If the code is infested with a sloppy resource usage pattern, improve that step by step.

    However, if the application really needs the memory, just add the memory it needs. The second WTF! is that Wilhelm was so obsessed with the rule of thumb "web applications shouldn't use more than 512MB" that he also ignored that some web applications in fact do need more. Trying to squeeze all the data that the application really needs into 512MB is good when for some reason you work on a platform that is limited to a maximum of 512MB. This was more or less how things worked in the C64 era: the maximum was 64KB and there were (almost) no options for expanding that.

    However, if upgrading your memory is as simple as installing an extra DIMM for a few bucks, then you really should try that option first.

  • xtremezone (unregistered)

    (Doesn't have time to read through 7 @#$%ing pages)

    I have to agree with the re-engineering solution. Throwing more hardware at the problem would only be a temporary fix, if that. Eventually the leaking resources would likely still fill up the available memory or the application would grow and they'd face an exponentially bigger problem. So it was probably a good idea to re-engineer the application, but the fact that Wilhelm didn't know that it was a better idea than the hardware fix is still a WTF. ;D

    It's hard to say without more information (and experience :P).

  • (cs) in reply to SoonerMatt
    SoonerMatt:
    OK, so far it looks like we have a consensus. The real WTF is covering up undisciplined, problematic code with more hardware. Wilhelm's discipline and efficient use of resources should be commended.

    Well, there is a difference between fixing sloppy code that uses resources too generously and taking draconian measures to save a few bytes.

    Stuff like encoding Booleans into an integer, dropping buffers and caches altogether, abundant use of static methods, etc. is not really an improvement to the architecture of a system. Yes, it saves some memory here and there, but that kind of code is WAY harder to maintain.

    Like the article mentions, Wilhelm is from the C64 era. We could do wonders then with assembly, even doing crazy stuff like jumping between opcodes to squeeze every byte out of the system. -This- style of programming is not recommended anymore. Any trivial programming task will take at least 10 times longer (if not more), and maintenance in such a refactored system is harder, not easier.

    I think many people here confuse the idea of refactoring sloppy code (which is always a good practice) with applying draconian 8-bit practices from the '80s to modern 32- or 64-bit computing platforms.

  • Craig Landrum (unregistered) in reply to anonymous

    64KB? 16KB? 3.5KB?

    Wimps. If you couldn't make it fit in the 256 bytes that came with the Altair, you weren't trying very hard.

    IMSAI's "Kill the Lights" game fit in waaaay less than 256 bytes. My kids love that game.

    Eventually of course, everything will get optimized down to one byte containing the number 42. :-)

  • (cs) in reply to Bernie
    Bernie:
    Also, there is a thing called Time-Space Trade Off. I'm sure you've heard of it.

    In all, it is possible that the app is smaller but a lot slower.

    Indeed, that's another major problem when applying draconian 8-bit style programming paradigms to a modern code base. The article doesn't mention it explicitly, but taking (extreme) measures to save a few bytes will simply cost you in time. If you save those bytes by encoding Booleans in an integer, it will cost you in CPU cycles when your code needs to mask out the correct bit. If you disable caching you will save even more memory, but it will cost you in time when your code needs to make a round trip to the database, etc.
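
    For example (a hypothetical sketch, not the article's actual code): packing flags into one int saves a few bytes but pays for it with shifting and masking on every access.

        // Up to 32 booleans packed into the bits of a single int.
        public class PackedFlags {
            private int bits;

            public void set(int index, boolean value) {
                if (value) {
                    bits |= (1 << index);   // switch the bit on
                } else {
                    bits &= ~(1 << index);  // switch the bit off
                }
            }

            public boolean get(int index) {
                return (bits & (1 << index)) != 0;  // mask out the requested bit
            }
        }

        // versus plain boolean fields: a little larger, but each read is a direct
        // field access with no shift/mask work and far more readable call sites.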

    Again, a lot of the people commenting assume that the original code was overly sloppy and leaking resources.

    Now just suppose that this wasn't the case, or at least not that extreme (no software is perfect after all). Suppose the app was relatively well written and simply needed a certain amount of data to be in memory.

    Since Wilhelm obviously didn't invent the new perfect lossless compression algorithm, the server will eventually need more memory anyway if the business grows. Wilhelm's draconian coding efforts just postponed this situation.

    Maybe the REAL WTF! is that the article wasn't more clear at what the exact underlying problem was ;)

  • ikdind (unregistered) in reply to Rumblefish
    Rumblefish:
    For instance (no pun intended), if you have a DateTime class that defines a "getCurrentDateTimeAsString()" method that retrieves the system's current date and time, you would be best off defining that as a static method, because the system's date and time should be the same across all instances and therefore shouldn't require instantiation to work properly.
    Humble opinion: TRWTF there is wrapping around OS functionality like that within an application-defined object.

    For that matter, why are you implementing your own DateTime?

  • (cs) in reply to webhamster
    webhamster:
    Yeah, I was being sarcastic a tad. But still consider that they went to the MOON with 72K of memory and we're talking about a web app to sell widgets being underpowered at 2GB.
    Gooooood point.

    Just about as gooooood a point as anything else so far.

    I loved watching the Moon landing when I was a kid. I even loved playing with my dad's Fortran program to simulate it on a George III machine. It was great.

    But it wasn't exactly a server system, was it?

    Hell, I've written a bisync (technically ALC) driver in 8K of code that would support either the master or the slave side for around a hundred terminals. (The trick was that I used the code segment for the master side as the data segment for the slave side, and vice versa.)

    But so what?

    The only thing I've learned so far from this thread, and I haven't even learned it because I already knew it, is that you should never let a Web developer near a high-performance server system. In 95% of cases, they will just screw it up. Even after ten years in the wild, this is still a random collection of half-baked and non-orthogonal technologies, compounded by understandable but not forgivable ignorance of the underlying problems of scalability, latency, and resource consumption.

    We're stuck with Web crap. The bubble has burst, a long time ago, but the so-called technology is still there.

    It was the wrong way to go ten years ago, but unfortunately we now have ten years of investment in this drivel, and we're stuck with it. And with the hordes of idiots who claim they can produce a working system by using it.

    I'd call that the biggest lost opportunity in history since I met Marilyn Monroe and tried to chat her up by telling her that "Norma Jean" is a really sexy name. And I wasn't even born at the time.

  • (cs) in reply to Rumblefish
    Rumblefish:
    troels:
    Because of this approach the overall design of the system wasn't totally consistent — duplicate code, instance methods that should be static ...

    There is no such thing as instance methods that should be static.

    Ahh, but there are. In a number of poorly designed systems I've seen personally, there are methods within a class that are instance methods, although they perform no function that is specific to any instance. For instance (no pun intended), if you have a DateTime class that defines a "getCurrentDateTimeAsString()" method that retrieves the system's current date and time, you would be best off defining that as a static method, because the system's date and time should be the same across all instances and therefore shouldn't require instantiation to work properly. But I've seen numerous situations where, for some reason apparently known only to the developer, a method like that was defined as an instance method.

    Why are you even bothering to debate this?

    The man has Burger Flipper tattooed on his forehead.

    Leave him to his meat patties. Don't taunt him. That's not nice, and it might result in a nasty burger flame war...

  • Dan (unregistered) in reply to anonymous

    3.5K!?! I started with 2K on a Sinclair!

  • pedant (unregistered) in reply to anonymous

    3.5k!?! I started with 1k on a zx81.

  • Mr (unregistered) in reply to Tim
    Tim:
    Amazing how we geeks boast about how small we can make things. Is this some sort of reverse-phallic-dilemma? Would Freud have a field day with us or what...

    T-

    Perhaps it's to compensate for our huge brains?

  • Mr (unregistered) in reply to Edss
    Edss:
    But I am a little concerned by the "2GB of memory for fifty, sixty bucks" in a day when a production web server has 512MB. Doesn't quite make much sense.

    It was probably an old server. Businesses usually don't like to buy new servers unless they absolutely have to.

  • Mr (unregistered) in reply to xtremezone
    xtremezone:
    Throwing more hardware at the problem would only be a temporary fix, if that. Eventually the leaking resources would likely still fill up the available memory
    Why does everyone think that it was caused by memory leaks? Perhaps the application just needed more memory for the tasks. If there were memory leaks, I think Wilhelm would have said that to the intern.
    xtremezone:
    or the application would grow and they'd face an exponentially bigger problem.
    Remember that the old version only used twice the memory. That means that when the old version needs 4GB, the new one still needs 2GB. Then it's probably about time to add a second server anyway...
  • (cs) in reply to WizardStan
    SomeCoder:
    And you speak of downtime and lost revenue due to reboots? Wilhelm's (and by extension your) solution to rewrite the code comes in at 5 months of daily crashes and reboots. The intern's (and by extension my own) solution of more memory comes down to one (maybe two) days of crashes and reboots to replace the ram. [...] But first, get some more ram into that sucker so it doesn't need to be restarted so often.

    That's the core of this WTF! Although from reading the article we don't really learn whether it was a sensible rewrite or a maniacal and draconian exercise to fit more stuff in a tight space, at any rate adding more RAM to the system would have probably mitigated the problems during these 5 months.

    As I mentioned before, it would have at least given the people operating the server more time. If the system was operating with 512MB then, even if it was leaking memory, it would most likely not leak memory any faster when equipped with, say, 4GB of memory. So, 4GB of memory could just be used as a safety buffer. If you set an alarm to go off when memory usage reaches 512MB, then you still have about 7-8 times your average uptime left to reboot the server. In practice this means you could probably reboot the server at leisure, say over the weekend in the middle of the night.

    Wilhelm might have been correct to address any possible underlying issues, but he failed when he considered his solution the ONLY solution. Why not add RAM AND fix the underlying problem?

  • (cs) in reply to Mr
    Mr:
    Why does everyone think that it was caused by memory leaks? Perhaps the application just needed more memory for the tasks. If there were memory leaks, I think Wilhelm would have said that to the intern.

    Indeed. "That would've been an option too" were Wilhelm's own words, not those of a PHB. Wilhelm also specifically remarked that his reasoning was that "no web application should ever require more than 500 megs".

    Now does that sound as if memory leaks were at the core of the problem? Not really. If it was, Wilhelm would have been quick to say that adding more memory was only a senseless temporary solution. But he didn't say that. He couldn't say that if it wasn't the issue. Not when being in a meeting with other developers (he was about to discuss the 'architectural changes', you wouldn't explain these to a PHB).

    Also read this:

    Changes had a tendency to delay development, as some parameters were being set too conservatively, breaking something else down the line.

    That doesn't sound like solving memory leaks, does it? This sounds like minimizing memory buffers up to the point where it influences the stability of the application...

  • Adam (unregistered)

    Even from a business perspective it can often be the right decision to spend the time optimizing. For example, if you have a lot of customers and your product is not that expensive, it can be too much to ask of your customers to upgrade their hardware. Things like memory usage can make the difference between making the sale and not. Just because so many people these days work on single-instance environments, where it is virtually always the best option to upgrade the hardware, does not mean that everybody does. It certainly does not make a five-month optimization effort that likely turned out a slim, pleasant-to-use product a WTF.

  • Bob (unregistered) in reply to anonymous
    anonymous:
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.

    16k!?! I started with 3.5k on a VIC-20, thankyouverymuch.

    Feh. 128 BYTES on a PDP-3.

  • (cs) in reply to Dave G.
    Dave G.:
    It's obvious that most people here have no business training whatsoever.

    5 months (someone estimated this at $200,000) to fix the problem, compared to buying a RAM module for $60? How is this even a discussion? How can you possibly argue that the extra $199,940 that was spent was worth it? Hell you might as well rewrite the app from scratch.

    I've been programming my whole life and while I love the idea of optimising the hell out of something and coming away with a positive result (50% improvement is very impressive), the business side of me could never, ever justify such a huge disparity in time and cost.

    I know the extra RAM doesn't actually solve the problem and it's a temporary fix. As such, while the code is being maintained through its normal course, small periods of refactoring / optimisation should be conducted to incrementally improve the application.

    But for god's sake, blocking off 5 months of time to work on something like this is financial suicide. For a public company, it would be borderline criminally negligent to waste so much money so carelessly.

    What if it took 12 months to fix this? That's $480,000. How about 2 years for almost a million dollars? When do you say "ok, that $60 RAM bank is looking pretty good now"?

    Hint: the correct answer is not "we will never say that, we should spend all the time necessary to fix the application, no matter what the cost is". If you disagree with this statement, then I'd advise you never to start your own business, because you will be bankrupt within a year.

    Get some perspective please my fellow geeks. I know it's cool to hate on "business decisions", and "managers" but this one isn't close. This time, the geeks have it wrong. Trust me.

    Quoted for sanity.

    I don't know how anybody could possibly justify that the "fix the software" route was right, given the facts of the story as presented.

    Firstly, stuffing 2GB of RAM in the server may or may not have solved the problem (we don't know there was an actual memory leak). If it didn't solve the leak, what have we lost? Sixty dollars, according to the original story. That's certainly less than the cost of the first meeting the five developers had to discuss their strategy. Get some perspective, $60 is small change.

    Let's say the $60 works in the short term, but as time goes by, things slow down and after another six months, they have to add another $60 worth of RAM. No, let's say that they have to throw $10K of servers at the problem every six months. That means the solution has to be running for ten years before the cost of the "throw hardware at it" approach exceeds the cost of the fix the problem approach on the assumption that the fix it approach costs $200K.

    As described in the story, the approach taken was complete insanity.

    Of course, all of the above is completely false if the problem really was a memory leak. But then you have to fire all of the developers for taking 30 man months just to find and fix a memory leak.

  • (cs) in reply to Jake Clarson
    Jake Clarson:
    SomeCoder:

    ... SNIP .....

    But first, get some more ram into that sucker so it doesn't need to be restarted so often.

    .... SNIP ....

    Wilhelm might have been correct to address any possible underlying issues, but he failed when he considered his solution the ONLY solution. Why not add RAM AND fix the underlying problem?

    (1) Why would that have been Wilhelm's job in the first place - he is in development, not in data center operations? Operations should have taken the memory approach all by themselves.

    (2) Previous posters already named long purchase processes and/or related budget issues as reasons for lacking memory upgrade implementations. This can be an issue - especially in a government organization.

    (3) It is very obvious to me that Wilhelm had management support for his 5 guys/5 months re-factoring project. You don't get 25 man-months all by yourself without management buy-in for internal re-factoring. The reasons for the management support remain open to speculation, but it must have been there.

    (4) In all likelihood Wilhelm's team did more than just re-factoring. No project team working on re-factoring is ever left in peace by scope creep. Even if this was a government organization, there must have been a CYOA reason in the project requirements to allocate the resources. It's 2008, people - nobody is allocating 25 man-months for internal re-factoring of a widget web app. I can practically see the "leveraging implementation synergy" catch phrases in the project proposal ... ;-)

  • Aredhel (unregistered) in reply to Rumblefish

    In a number of poorly designed systems I've seen personally, there are methods within a class that are instance methods, although they perform no function that is specific to any instance. For instance (no pun intended), if you have a DateTime class that defines a "getCurrentDateTimeAsString()" method that retrieves the system's current date and time, you would be best off defining that as a static method, because the system's date and time should be the same across all instances and therefore shouldn't require instantiation to work properly. But I've seen numerous situations where, for some reason apparently known only to the developer, a method like that was defined as an instance method.[/quote]

    Well, you can define "getCurrentDateTimeAsString()" as an instance method to simplify unit testing. Otherwise you would have a hard time mocking the method, as far as I know.
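
    A rough sketch of that testing argument (the interface and class names are hypothetical): if the time source is something an object receives rather than a static call, a test can hand in a fixed one.

        // Seam for testability: production code depends on an interface
        // instead of calling a static time method directly.
        interface TimeSource {
            String getCurrentDateTimeAsString();
        }

        class SystemTimeSource implements TimeSource {
            public String getCurrentDateTimeAsString() {
                return new java.util.Date().toString();
            }
        }

        class ReceiptPrinter {
            private final TimeSource clock;

            ReceiptPrinter(TimeSource clock) {
                this.clock = clock;
            }

            String header() {
                return "Printed at " + clock.getCurrentDateTimeAsString();
            }
        }

        // In a unit test, a canned implementation makes the output deterministic:
        //   new ReceiptPrinter(new TimeSource() {
        //       public String getCurrentDateTimeAsString() { return "2008-04-01 12:00:00"; }
        //   });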

  • Aredhel (unregistered) in reply to Aredhel
    Aredhel:
    In a number of poorly designed systems I've seen personally, there are methods within a class that are instance methods, although they perform no function that is specific to any instance. For instance (no pun intended), if you have a DateTime class that defines a "getCurrentDateTimeAsString()" method that retrieves the system's current date and time, you would be best off defining that as a static method, because the system's date and time should be the same across all instances and therefore shouldn't require instantiation to work properly. But I've seen numerous situations where, for some reason apparently known only to the developer, a method like that was defined as an instance method.

    Well, you can define "getCurrentDateTimeAsString()" as an instance method to simplify unit testing. Otherwise you would have a hard time mocking the method, as far as I know.

    Oops... I am sorry for deleting too much BBCode and missing the preview button :(

  • (cs) in reply to cklam
    cklam:
    Jake Clarson:
    Why not add RAM AND fix the underlying problem?

    (1) Why would that have been Wilhelm's job in the first place - he is in development, not in data center operations? Operations should have taken the memory approach all by themselves.

    Not all companies have a separate data center operations team. Most companies I know that basically operate a web application have their programmers also oversee the configuration of the server on which their web application is running. At the very least they advise a system administrator about the hardware that's needed.

    Wilhelm's first task would have been to advise whoever is in charge of the actual hardware to install that extra dirt-cheap memory module.

    Wilhelm either neglected to do that because of his stubborn belief that no web application should run in more than 500 MB, OR... Wilhelm tried to solve the problem within the realm where he has his power instead of communicating with "those other guys".

    This latter option would be an altogether different WTF! You see this more often: front-end programmers that try to solve a problem in Javascript/HTML while it should obviously be done at the server side in Java, Java programmers that try to fix a problem in Java code while it obviously should be done in SQL, and, to come full circle, SQL coders that try to fix a problem in SQL while it very obviously should be done at the front-end presentation tier (yes, this sounds weird but it does happen).

    My guess, however, is the first option: Wilhelm falsely assumed that 512MB was a fixed limit and just completely ignored the idea that servers can be upgraded.

    Believe me, it was a programmer that sent in the original story. No PHB is going to spend his time writing something like this up and sending it to thedailywtf. Just because the article appeared here, I believe the WTF! is what it seems: Wilhelm took 5 months to fix something in software which also could have been done with a simple memory upgrade. If it was indeed a memory leak then Wilhelm was partially right (although he still should have installed that extra 2GB first and then started the fix). Either way, there is a WTF! in what Wilhelm did.

  • adee (unregistered) in reply to anonymous

    losers! 128 bytes on Atari 2600

  • Programmer X (unregistered)

    I would guess this was a C# application written by people who had never heard of the 'using' statement, or a Java app where a great deal of resources that should have been closed and cleaned up were actually never closed, but just 'leaked' to the garbage collector.
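
    A minimal Java sketch of the kind of leak being described and the routine fix, assuming plain JDBC (the connection URL and query are placeholders):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class WidgetDao {
            // Leaky style: if anything throws, or simply when the method returns,
            // the connection is never closed and hangs around until finalization.
            public int countWidgetsLeaky() throws SQLException {
                Connection con = DriverManager.getConnection("jdbc:example://db/widgets");
                Statement st = con.createStatement();
                ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM widget");
                rs.next();
                return rs.getInt(1);
            }

            // Deterministic cleanup: close in finally (or try-with-resources on Java 7+).
            public int countWidgets() throws SQLException {
                Connection con = DriverManager.getConnection("jdbc:example://db/widgets");
                try {
                    Statement st = con.createStatement();
                    try {
                        ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM widget");
                        rs.next();
                        return rs.getInt(1);
                    } finally {
                        st.close();
                    }
                } finally {
                    con.close();
                }
            }
        }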

    Makes one wonder how much of the problem would have been solved just by explicitly invoking the garbage collector at a few select locations.

    TRWTF are the comments, as usual.

  • (cs) in reply to Aredhel
    Aredhel:
    Well, you can define "getCurrentDateTimeAsString()" as an instance method to simplify unit testing. Otherwise you would have a hard time mocking the method, as far as I know.

    You would only have a hard time mocking the function if you didn't know what you were doing.

    As a perl programmer, mocking functions is particularly easy. However, even in other languages I've worked in, there's generally a way. For example, for many compiled languages, there's the LD_PRELOAD environment variable.

    Not to mention: why are you mocking it? Why not simply have your unit test factor in the concept that time changes? After all, without that, it's not exactly a valid test: you know your software did work once upon a time, but you don't know if it still works. (Yes, I know that this refutation is specific to time - that's why I answered the question before mocking it.)

  • nevyn (unregistered) in reply to troels
    troels:
    Because of this approach the overall design of the system wasn't totally consistent — duplicate code, instance methods that should be static ...

    There is no such thing as instance methods that should be static.

    Oh, so you're a Java programmer, then?

  • (cs) in reply to LEGO
    LEGO:
    Erlingur:
    Write-only memory? I really, really hope you're kidding...

    Captcha: nobis (seems quite fitting)

    Ever heard of paper?

    Ever heard of literacy?

  • (cs) in reply to ambrosen
    ambrosen:
    Hi there, I'm writing through a wormhole from Optimum BBS back in 1988, dial in on 629 969 0009, because we've got a thread going about some idiot who's just spent 5 months recoding the main application where I work on the VAX because it really should fit everything into 512kB of memory, shouldn't it?

    He was unrolling loops, inlining functions, hand writing assembly code and everything, until this intern called up the Digital Equipment Corporation maintenance contractor and got him to upgrade the memory, and everything ran fine. But this bloke said it needed to be rewritten anyway, because any minicomputer application should fit in 512kB.

    Anyway, this bloke started working on a PDP-8 with 6kB, and says he could do anything with that so why should he need any fancy compiled code.

    To give him his due, he's got the memory use down lots, and it's all very nicely structured, but it has taken masses of time.

    Anyway, it's all "plus ça change, plus c'est la même chose", really. I don't think there's anything else I can add.

    Well, hi back 'atcha from the twenty-first century! We've had some incredible social and technological developments since your day, there's so much to tell you about, but I guess the thing that's going to amaze you most is that these days we've discovered that unrolling loops and inlining functions are actually lousy ways to reduce code space, tending, as they do, to have entirely the opposite effect...

  • Nat (unregistered) in reply to Anonymous

    It was the correct way to do it from a University Computer Science department standpoint, but businesses don't operate that way.

    I just finished a CS program, and my classes covered software optimization two or three times. The first optimization step is always "see if buying faster/bigger hardware will make the problem go away".

    Even once you actually get into changing the program for optimization, you look for asymptotically better algorithms, not for places to save a couple meg of RAM.
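
    As a toy illustration of that point (hypothetical code, not from the article): swapping a quadratic duplicate check for a hash-based one buys far more headroom than shaving bytes ever will.

        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;

        public class Dupes {
            // O(n^2): compares every pair of items.
            static boolean hasDuplicateSlow(List<String> items) {
                for (int i = 0; i < items.size(); i++) {
                    for (int j = i + 1; j < items.size(); j++) {
                        if (items.get(i).equals(items.get(j))) {
                            return true;
                        }
                    }
                }
                return false;
            }

            // O(n): a HashSet trades a little memory for a much better growth curve.
            static boolean hasDuplicateFast(List<String> items) {
                Set<String> seen = new HashSet<String>();
                for (String item : items) {
                    if (!seen.add(item)) {  // add() returns false if the item was already present
                        return true;
                    }
                }
                return false;
            }
        }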

  • Michael (unregistered) in reply to anonymous
    anonymous:
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.

    16k!?! I started with 3.5k on a VIC-20, thankyouverymuch.

    Try 256 BYTES of RAM (2KB of ROM) on for size sometime. (HC05K)

  • Me (unregistered) in reply to SomeCoder
    SomeCoder:
    Yes, I completely agree with what has been said here. Wilhelm was completely correct in doing what he did. Yes, he was maybe a bit hard nosed about things but I tend to agree with him on one point: programmers are lazy asses these days.

    Why is it that hardware gets faster but our programs actually run slower? Because programmers are lazy: "Hey, we've got 4 GB of RAM now, so I don't really need to be careful with memory."

    This is, to me, completely unacceptable. It is just as unacceptable as saying "Well, the application runs out of memory constantly so the fix is to buy more RAM." That is asinine and doesn't solve the real problem.

    If the app was already designed well and was running out of memory because your server only had 512 MB, that's one thing. Since the article specifically states that that is not the case, the real WTF is that Wilhelm had to be embarrassed for implementing changes that reduced the memory footprint by 50% (a nice feat in and of itself).

    He should have just screamed and jumped out of the window. You never miss a Wilhelm scream :)

    +1^100000000000000000000000000000000000000000

  • Steve (unregistered) in reply to Me
    +1^100000000000000000000000000000000000000000
    So ... +1?
  • Andy (unregistered) in reply to anonymous

    Actually VICs had 4K of RAM. If you chose to use the upper 512 bytes for display then that was really up to you ;-)
