• Kuba (unregistered) in reply to Harrow
    Harrow:
    Dave:
    We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. -- Donald E. Knuth 1974

    We follow two rules in the matter of optimization: Rule 1. Don’t do it. Rule 2 (for experts only). Don’t do it yet – that is, not until you have a perfectly clear and unoptimized solution. -- M. A. Jackson 1975

    So, was Wilhelm guilty of "premature optimization" or was he first creating "a perfectly clear and unoptimized solution"?

    While everyone is quoting Knuth and Jackson on their optimization insights, we hopefully all agree that premature pessimization should be avoided too.

    Basically there needs to be a "golden middle", and it may even evolve (duh!) over time, as more becomes known about the application's real-world performance, etc.

  • (cs) in reply to FredSaw
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.
    Guess I was 1/4 of a real programmer back in jr. high. Anyone remember the TRS-80 Color Computer 1 with 4k?
  • Rich (unregistered) in reply to anonymous

    Timex Sinclair 1000 - 1KB of ram.

  • Kuba (unregistered) in reply to Nat
    Nat:
    Even once you actually get into changing the program for optimization, you look for asymptotically better algorithms, not for places to save a couple meg of RAM.

    That's true, but a big problem is the kind of architecture that pessimizes early and pessimizes often. In the case of bad architectural decisions, the underlying algorithms are probably OK, but everything is wrapped in so much overhead that even O(1) stuff has an impact, since you multiply it by the number of layers.

    So, one has an application where a request goes (gets marshalled, etc.) through 5 layers of this-or-that-ware. While each layer may be individually pretty well tuned, you end up with something that still won't run on a 32-bit JVM. Each layer may only take, say, 256-512MB of Java heap, but with all of them present and active you get something that doesn't fit in 32 bits... That is often the case, and "we shall use xyz-ware" comes from higher-ups who need a cluebat at best.
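
    (Back-of-the-envelope, using the numbers above: five layers at 256-512MB of Java heap each comes to roughly 1.25-2.5GB, while a 32-bit JVM can usually address only somewhere around 1.5-2GB of heap, so the sum blows the budget even though no single layer looks unreasonable.)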

  • Finibus Bonorum (unregistered) in reply to silent d
    silent d:
    Ozz:
    anonymous:
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.
    16k!?! I started with 3.5k on a VIC-20, thankyouverymuch.
    I started with 1k on a Sinclair ZX81 back in '81. Though, I did have some fun in college in the mid 80s with a 6502-based EMMA (25-key keypad and a 7-segment display - it was programmed in raw hex) but I can't remember how much RAM it had.

    1k? You were lucky. When I started we only had a 16-bit abacus, and when that broke down we had to go back to scratching marks on clay tablets, outside in the pouring rain, walking uphill both ways and ... oh, nevermind.


    [CAPTCHA: saluto] Quite apropos

    So, saluto!

    You had an abacus? And a 16-bit version, at that? Lucky you!

    In my day [the late Cretaceous Era], since our 1-bead abacus was still in the prototype stage, we had to use finger calculators. Initially, we used binary code http://en.wikipedia.org/wiki/Finger_binary. Then, fortunately, a few eons later [in the early Tertiary Era], we received a software upgrade to Base 10 http://www.cs.iupui.edu/~aharris/chis/chis.html.

    However, in both cases, our right foot served as a buffer and the left foot was reserved for buffer overflows & TSR ["terminate and stay resident"] memory recycler/garbage collector processes. And when the system locked up, our belly-buttons were used as reset buttons.

  • Finibus Bonorum (unregistered) in reply to Master TMO
    Master TMO:
    I think the WTF on Wilhelm's side is that he didn't even consider the possibility of the hardware fix. As the story is told, he was so stuck in the past that 'no webserver should need more than 512k', even though webservers can easily have GBs today.

    His team gets kudos for actually fixing all those problems though. In my experience, being allowed to do that almost never happens. Kludge and band-aid until the whole system is practically a mummy, and hopefully by then there will be a new application available that can completely replace the doddering heap that is left.


    [CAPTCHA: secundum] - I got nothing...

    Instead of "512k", try "...but no web application should ever require more than 500 megs.".

  • monzo (unregistered) in reply to Rumblefish

    start with 2000 of them and let experience teach you some more tricks...

    I actually agree with the rewrite (despite the many man hours) because it is a core application.

    The investment reasons are a bit off, but otherwise...

  • tonecoke (unregistered) in reply to anonymous
    anonymous:
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.

    16k!?! I started with 3.5k on a VIC-20, thankyouverymuch.

    3.5k? I started with 1k on the ZX-81...

  • rainer (unregistered) in reply to Karl V
    Karl V:
    We had a situation a while back with a report that kept throwing Out Of Memory exceptions as well. (snip) He was using static arrays... five to be exact... of long... set to 20000000. He was "Planning for future growth".(At that time our largest client had about 20000 records for that report)

    Any decent OS uses deferred allocation, so the actual resource usage should still be almost zero. I can't see where the OOM would come from.
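
    For scale, a minimal sketch of the allocation Karl V describes, assuming Java (the field names are invented for illustration): five static arrays of 20,000,000 longs come to roughly 5 x 20,000,000 x 8 bytes, i.e. about 800MB of array slots, whether or not they ever hold real data.

        // Hypothetical reconstruction of the "planning for future growth" arrays.
        // 5 arrays x 20,000,000 longs x 8 bytes each ~= 800MB, for a report whose
        // largest real-world case was about 20,000 records.
        class ReportBuffers {
            static final int CAPACITY = 20000000;

            static long[] ids      = new long[CAPACITY];
            static long[] amounts  = new long[CAPACITY];
            static long[] dates    = new long[CAPACITY];
            static long[] accounts = new long[CAPACITY];
            static long[] totals   = new long[CAPACITY];
        }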

  • Kriegel (unregistered) in reply to Rumblefish

    There are reasons for this, mainly: providing seams to make the beast testable. Object seams and object creation are a neat way of swapping one implementation for another.

    Go read Michael Feathers' book "Working Effectively with Legacy Code". He says: "Code without tests is legacy code."

    But who needs tests? Automated, reliable and repeatable tests?

    Besides, controlling the "system time" during such a test, and especially setting current time millis to a specific value, can come in handy.

    And yes, there are link-time seams (swapping one .o or .jar with another) but they are more cumbersome.
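
    As a minimal sketch of the kind of object seam meant here, applied to the "current time millis" case (the interface and class names are made up for illustration): production code asks an injected clock for the time, and a test swaps in a fixed one.

        // The seam: code depends on this interface, not on System.currentTimeMillis() directly.
        interface Clock {
            long currentTimeMillis();
        }

        class SystemClock implements Clock {
            public long currentTimeMillis() { return System.currentTimeMillis(); }
        }

        class FixedClock implements Clock {
            private final long millis;
            FixedClock(long millis) { this.millis = millis; }
            public long currentTimeMillis() { return millis; }   // a test pins "now" to any value it likes
        }

        // Hypothetical consumer: gets a SystemClock in production, a FixedClock in tests.
        class SessionReaper {
            private final Clock clock;
            SessionReaper(Clock clock) { this.clock = clock; }

            boolean isExpired(long createdAtMillis, long ttlMillis) {
                return clock.currentTimeMillis() - createdAtMillis > ttlMillis;
            }
        }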

  • hms (unregistered) in reply to Rumblefish

    You may be right with regard to the semantics of the methods of the objects/classes in question, but you are wrong with regard to memory/CPU footprint.

    Modern languages implement OOP instance methods by passing a pointer to the object's structure in the executable (i.e. a pointer to the object's first member). The compiler knows where the next and next-next members, and things like a possible vtable, reside, and translates any code referring to them into memory offsets in CPU instructions.

    And it's much the same for static methods. If a class has static members, the compiler will do some offset-play for you. Depending on the language, there may still be some class-object pointer passed as the first parameter of the method you called.

    The only difference is that static objects are not destroyed like automatic objects on the stack; they are kept in memory. The only truly meaningful scenario might be an exception being thrown with a billion objects on the stack, so that stack unwinding has to delete a million pages on its way down. That is negligible, by the way: if it happens, you have hit a totally unexpected error and you will have programmed your application to exit gracefully at that point anyway - so who cares?

    In sum, struct Car { static int foo; static int getFoo(void); } differs from struct Car { int foo; int getFoo(void); } only in so far as:

    • Case 1: the 4 bytes for foo are created once, plus a single copy of the code for getFoo().
    • Case 2: the 4 bytes are created per object, on the heap or the stack, and will possibly be deleted again, plus the same single copy of the code for getFoo().

    Note that 4 bytes is far less than a page (4k) in ANY relevant operating system, so deallocating that memory will in practice be part of an effort that gets rid of many more such objects at once.

    People need to get over the notion that object-oriented programming and objects are something bloated. Sure, you can make it bloated, but that's solely because people don't know jack about programming. It's not because an integer is suddenly 4kb.
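
    Roughly the same point in Java terms (a sketch; the class names are invented, and no claim is made about identical machine code): the only structural difference is whether an implicit receiver reference travels with the call.

        class CarStatic {
            static int foo;
            static int getFoo() { return foo; }   // invoked with no receiver
        }

        class CarInstance {
            int foo;
            int getFoo() { return this.foo; }     // 'this' is passed implicitly: one extra reference
        }

    Either way the code for getFoo() exists exactly once; only the 4-byte field is per-class versus per-object.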

    HMS

  • Wilhelm (unregistered) in reply to Anonymous Cow-Herd
    Bruteforce:
    the real fix won't happen as long as the system works reasonably well.

    Check the title of the article: the great Wilhelm himself granted that spending $50 and half an hour would have been an option too, compared to employing a team of programmers for 5 months.

    Plenty of out-of-touch-with-reality nerds going crazy in this thread. Nerds that would be unemployed pretty quickly, having destroyed the companies they worked for, if they had a say in business decisions.

  • Wilhelm (unregistered) in reply to Finibus Bonorum
    Finibus Bonorum:
    [CAPTCHA: secundum] - I got nothing...

    Luckily, nobody cares.

  • anonymous joker (unregistered) in reply to anonymous

    Using memory! Bah, it will never catch on!

  • (cs) in reply to dkf
    dkf:
    Nate:
    if this was a high-demand app running on any decent server, 2GB of ECC RAM for that sort of server is not $56; it's more likely to cost over $1500
    Wow! You're paying far too much for your ECC RAM. Who are you buying it from? Sun? Apple? HP? Someone else with equally inflated prices? A quick google should find much better prices than that.

    Does your CIO (or other manager) allow you to buy RAM at Walmart? Don't you have, e.g., a Compaq which needs "server" RAM? OK, $1500 is a bit too much, but I'm sure you won't get anything certified for 100 bucks.

  • squigs (unregistered) in reply to Rumblefish

    But what difference does it make? You're passing an extra pointer to the instance, but it shouldn't affect memory usage, speed, or reliability enough to worry about.

  • (cs) in reply to Sam
    Sam:
    "Wilhelm was given four developers to help with the task."

    None of whom asked the same question.

    They just cared about their paycheck. The "Who cares?" people wouldn't care for more. And who would ask programmers, anyway? Since they are stupid and have no clue about how business works, this is not even an option.

  • (cs) in reply to DangerMouse9
    DangerMouse9:
    FredSaw:
    He had the skillset as far as he was concerned, since he cut his teeth on Commodore 64 development where he had 64KB of memory
    Bah! Real programmers started with 16K.

    Lyle could've done it with 4K.

    I know Lyle. He would do it with 1 bit represented by his finger being horizontal or vertical. 4k is for poofties.

  • (cs) in reply to LEGO
    LEGO:
    Erlingur:
    Write-only memory? I really, really hope you're kidding...

    Captcha: nobis (seems quite fitting)

    Ever heard of paper?

    Paper is Write-Once-Read-Many (WORM)

  • (cs) in reply to Jake Clarson
    Jake Clarson:
    However, if upgrading your memory is as simple as installing an extra DIMM for a few bucks, then you really should try that option first.

    That is called sabotage.

  • Esben (unregistered) in reply to anonymous

    Reverse penile measuring, mine is smaller than yours? I began with nothing, opened my eyes one morning and thought "Hello world".

  • Joey (unregistered) in reply to Wilhelm

    I'd do both. Buy more RAM but also spend at least some time improving the memory management of the program.

    What's worse than his choosing optimisation over better hardware is that nothing is said about space complexity, or about whether the gain in memory efficiency hurt time complexity.

    If the comparative space complexity is simply a constant factor (0.46*n, which is still O(n)), then not getting a RAM increase and only improving memory management a little is stupid.

    The complete lack of any mention of space complexity is what makes this guy look incompetent. Any idiot can, for example, pack eight booleans into a byte rather than one per byte.
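
    For what it's worth, the eight-booleans-in-a-byte trick looks roughly like this (a sketch; whether it saves anything meaningful depends on how many flags you actually have):

        class Flags {
            private byte bits;   // eight boolean flags packed into one byte

            void set(int i, boolean value) {   // i in 0..7
                if (value) bits |= (1 << i);
                else       bits &= ~(1 << i);
            }

            boolean get(int i) {
                return (bits & (1 << i)) != 0;
            }
        }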

  • BigMouth (unregistered)

    The only problem with it: optimization must begin earlier than that.

    Memory footprint reduced to 46% means: nobody cares about performance until it's too late.

    "Hardware is cheap" is no excuse for crappy software.

  • Carter (unregistered) in reply to jtl

    Very few people are willing to PAY for beautiful code.

  • anon nona (unregistered) in reply to anonymous

    3.5k? Decadence! I started writing my code out longhand on a legal pad with a felt tip pen.

  • sacundim (unregistered) in reply to Rumblefish

    That is true. People write methods like that all the time.

    However, the implicit point here, that this makes your code slower, only holds in certain languages (e.g., C++), and even then only in certain situations (e.g., calling some such methods in an inner loop).

    Even then, some language implementations will detect when such a call is a performance hotspot, analyze the graph of classes loaded into the system to figure out that there is only one method the call can dispatch to, and inline the instance method into the relevant callers. Java HotSpot does this; that kind of method is therefore unlikely to be a performance bottleneck in a Java program.
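
    A sketch of the situation described above (the class names are invented): as long as DefaultPriceSource is the only implementation the JVM has loaded, HotSpot can treat the interface call as monomorphic and inline it at the hot call site; loading a second implementation later forces a deoptimization and recompile.

        interface PriceSource {
            long priceOf(String sku);
        }

        class DefaultPriceSource implements PriceSource {
            public long priceOf(String sku) { return 100; }   // the only implementation loaded so far
        }

        class Basket {
            long total(PriceSource prices, String[] skus) {
                long sum = 0;
                for (String sku : skus) {
                    sum += prices.priceOf(sku);   // hot loop: the single target can be inlined here
                }
                return sum;
            }
        }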

  • Nev (unregistered) in reply to Rumblefish

    His point is probably that you can't then mock out that function and replace it with some other implementation for unit testing...

  • Johannes (unregistered)

    To all the oldtimers here: 16K is for wimps. I have to do complicated routines with 1K TODAY.

    Atmel AVR, grampas!

  • Freyaday (unregistered) in reply to Anonymous
    Anonymous:
    The computer had 512MB of ram. What production computer has 512MB?
    The Xbox 360.
  • 🤷 (unregistered)

    TRWTF is that some developers think that "throw more hardware at it until it runs" is the solution for everything. They had crappy code, tons of unclosed connections, but hey "just install more RAM, duh!"

    Sorry, but this kind of behavior is the exact reason sites like this exist in the first place: because crappy programmers write crappy code and then just throw more hardware at it. At my old job we constantly had to deal with the SQL server having to be rebooted, because my predecessors didn't care about closing connections either. There are only so many connections in the connection pool, though. As soon as I mentioned the words "connection pool" I could see that no one else had ever heard of this.
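
    The fix for the unclosed-connection problem is usually as unglamorous as this (a JDBC sketch; the DataSource, table and class names are placeholders): with try-with-resources the connection, statement and result set are closed even when the query throws, and closing a pooled connection simply returns it to the pool.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import javax.sql.DataSource;

        class CustomerDao {
            private final DataSource pool;
            CustomerDao(DataSource pool) { this.pool = pool; }

            int countCustomers() throws SQLException {
                try (Connection con = pool.getConnection();
                     PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM customers");
                     ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getInt(1) : 0;   // resources are released here, success or failure
                }
            }
        }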

  • DarrickPiero (unregistered)

    Hello Guys, Glad to Join! :)
