Admin
While everyone is quoting Knuth and Jackson on their optimization insights, we hopefully all agree that premature pessimization should be avoided too.
Basically there needs to be a "golden middle", and it may even evolve (duh!) over time, as more gets known about the application's real-world performance, etc.
Admin
Admin
Timex Sinclair 1000 - 1KB of RAM.
Admin
That's true, but a big problem is the architecture that pessimizes early and pessimizes often. With bad architectural decisions, the underlying algorithms are probably OK, but everything is wrapped in so much overhead that even O(1) stuff has an impact, since you multiply it by the number of layers.
So you have an application where a request goes (gets marshalled, etc.) through 5 layers of this-or-that-ware. While each layer may be individually pretty well tuned, you end up with something that still won't work on a 32-bit JVM. Each layer may only take, say, 256-512MB of Java heap, but with all of them present and active the total doesn't fit in 32 bits... That is often the case, and "we shall use xyz-ware" comes from higher-ups who need a cluebat at best.
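To put rough numbers on that (a back-of-the-envelope sketch using the per-layer figures above; exact 32-bit limits vary by OS and JVM):
    5 layers x 256-512 MB of heap each  =  roughly 1.25-2.5 GB
A 32-bit JVM's heap typically tops out somewhere around 1.5-3 GB (often much less on Windows), before you even count permgen, thread stacks and native overhead, so the stack of layers simply doesn't fit.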
Admin
[CAPTCHA: saluto] Quite apropos
So, saluto!
You had an abacus? And a 16-bit version, at that? Lucky you!
In my day [the late Cretaceous Era], since our 1-bead abacus was still in the prototype stage, we had to use finger calculators. Initially, we used binary code: http://en.wikipedia.org/wiki/Finger_binary. Then, fortunately, a few eons later [in the early Tertiary Era], we received a software upgrade to Base 10: http://www.cs.iupui.edu/~aharris/chis/chis.html.
However, in both cases, our right foot served as a buffer and the left foot was reserved for buffer overflows & TSR ["terminate and stay resident"] memory recycler/garbage collector processes. And when the system locked up, our belly-buttons were used as reset buttons.
Admin
[CAPTCHA: secundum] - I got nothing...
Instead of "512k", try "...but no web application should ever require more than 500 megs.".
Admin
start with 2000 of them and let experience teach you some more tricks...
I actually agree with the rewrite (despite the many man-hours) because it is a core application.
The investment reasons are a bit off, but otherwise...
Admin
3.5k? I started with 1k on the ZX-81...
Admin
Any decent OS uses deferred allocation, so actual resource usage should still be almost zero. I can't see where the OOM would come from.
Admin
There are reasons for this, mainly providing seams to make the beast testable. Object seams and object creation are a neat way of swapping one implementation for another.
Go read Michael Feathers' book "Working Effectively with Legacy Code". He says: "Code without tests is legacy code."
But who needs tests? Automated, reliable and repeatable tests?
Besides, controlling the "system time" during such a test, and especially setting the current time millis to a specific value, can come in handy.
And yes, there are link-time seams (swapping one .o or .jar for another), but they are more cumbersome.
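For instance, a minimal sketch of such an object seam around the system clock in Java (the InvoiceStamper class is invented for illustration; java.time.Clock is the real JDK type, and the same idea works with a hand-rolled TimeSource interface): production code injects Clock.systemUTC(), while a test injects Clock.fixed(...) to pin "now" to a known instant.

    import java.time.Clock;
    import java.time.Instant;
    import java.time.ZoneOffset;

    class InvoiceStamper {
        private final Clock clock;                      // the seam: swap implementations here
        InvoiceStamper(Clock clock) { this.clock = clock; }
        Instant stamp() { return Instant.now(clock); }  // no direct System.currentTimeMillis() call
    }

    class SeamDemo {
        public static void main(String[] args) {
            InvoiceStamper prod = new InvoiceStamper(Clock.systemUTC());
            InvoiceStamper test = new InvoiceStamper(
                    Clock.fixed(Instant.parse("2010-06-01T12:00:00Z"), ZoneOffset.UTC));
            System.out.println(prod.stamp());   // real wall-clock time
            System.out.println(test.stamp());   // always 2010-06-01T12:00:00Z
        }
    }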
Admin
You may be right with regard to the semantics of the objects/classes' methods in question, but you are wrong with regard to memory/CPU footprint.
Modern languages compile an object's methods so that, in the executable, a pointer to the object's structure is passed (i.e. a pointer to its first member). The compiler knows where the other members and things like a possible vtable reside, and translates any code referring to them into memory offsets in CPU instructions.
It's much the same for static methods. If a class has static members, the compiler does the offset-play for you; depending on the language, there may still be some class-object pointer passed as the first parameter of the method you call.
The only difference is that static objects are not destroyed like auto objects on the stack; they are kept in memory. The only truly meaningful scenario might be an exception whose stack-unwinding has to delete a million pages because of a billion objects on the stack - negligible, by the way, because if that happens you have hit a totally unexpected error and your application should exit gracefully at that point anyway - so who cares?
In sum, struct Car { static int foo; static int getFoo(void); } differs from struct Car { int foo; int getFoo(void); } only in that the non-static version stores a 4-byte foo in every Car instance, while the static version keeps a single shared foo outside the instances.
Note that 4 bytes are far less than a page (4 KB) on ANY relevant operating system, so reclaiming the memory means freeing many more such objects before even a single page can be given back anyway.
People need to get over the notion that object-oriented programming and objects are something bloated. Sure, you can make it bloated. But that's solely because people don't know jack about programming. It's not because an integer is suddenly 4 KB.
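The same footprint point in Java terms, as a rough sketch (the byte counts in the comments assume 64-bit HotSpot with compressed oops, i.e. a 12-byte object header and 8-byte alignment; typical figures, not guarantees):

    class CarStatic {
        static int foo;                       // stored once in the class, not in any instance
        static int getFoo() { return foo; }   // no 'this' passed
    }

    class CarInstance {
        int foo;                              // 4 bytes in every instance (~16 bytes total with header + padding)
        int getFoo() { return foo; }          // receives an implicit 'this' reference
    }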
HMS
Admin
Check the title of the article: the great Wilhelm himself granted that spending $50 and half an hour would have been an option too, compared to employing a team of programmers for 5 months.
Plenty of out-of-touch-with-reality nerds going crazy in this thread. Nerds that would be unemployed pretty quickly, having destroyed the companies they worked for, if they had a say in business decisions.
Admin
Luckily, nobody cares.
Admin
Using memory! Bah, it will never catch on!
Admin
Does your CIO (or other manager) allow you to buy RAM at Walmart? Don't you have, e.g., a Compaq that needs "server" RAM? OK, $1500 is a bit too much, but I'm sure you won't get anything certified for 100 bucks.
Admin
But what difference does it make? You're passing an extra pointer to the instance, but it shouldn't affect memory usage, speed, or reliability enough to worry about.
Admin
They just cared about their paycheck. The "Who cares?" people wouldn't care about more. And who would ask programmers, anyway? Since they are stupid and have no clue about how business works, this is not even an option.
Admin
I know Lyle. He would do it with 1 bit represented by his finger being horizontal or vertical. 4k is for poofties.
Admin
Paper is Write-Once-Read-Many (WORM)
Admin
That is called sabotage.
Admin
Reverse penile measuring: mine is smaller than yours? I began with nothing, opened my eyes one morning and thought "Hello world".
Admin
I'd do both. Buy more RAM but also spend at least some time improving the memory management of the program.
What's worse than his choosing optimisation over better hardware is that nothing is said about space complexity, or about whether the increase in memory efficiency hurt time complexity.
If the space usage is still linear and the improvement is just a constant factor of 0.46, then forgoing a RAM increase and only improving memory management a little is stupid.
The complete lack of any mention of space complexity is what makes this guy look incompetent. Any idiot can, for example, pack eight booleans into a byte rather than one per byte.
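For what it's worth, a minimal sketch of that eight-booleans-in-a-byte trick in Java (java.util.BitSet does the same job for larger flag sets):

    class Flags {
        private byte bits;                          // up to 8 boolean flags in one byte

        void set(int i, boolean value) {            // i in 0..7
            if (value) bits |= (1 << i);
            else       bits &= ~(1 << i);
        }

        boolean get(int i) {
            return (bits & (1 << i)) != 0;
        }
    }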
Admin
The only problem with it: optimization must begin earlier than that.
Memory footprint reduced to 46% means: nobody cares about performance until it's too late.
"Hardware is cheap" is no excuse for crappy software.
Admin
Very few people are willing to PAY for beautiful code.
Admin
3.5k? Decadence! I started writing my code out longhand on a legal pad with a felt tip pen.
Admin
That is true. People write methods like that all the time.
However, the implicit point here, that this makes your code slower, only holds in certain languages (e.g., C++), and even then only in certain situations (e.g., calling some such methods in an inner loop).
Moreover, some language implementations will detect when such a call is a performance hotspot, analyze the graph of classes loaded into the system to figure out that there is only one method the call can be dispatched to, and inline the instance method into the relevant callers. Java HotSpot does this; therefore, that kind of method is unlikely to be a performance bottleneck in a Java program.
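As a sketch of the kind of call in question (names invented for illustration): with only one implementation of Source loaded, HotSpot's class-hierarchy analysis lets the JIT devirtualize and inline get(), so the hot loop ends up as plain field reads.

    interface Source { int get(); }

    class Constant implements Source {
        private final int value;
        Constant(int value) { this.value = value; }
        public int get() { return value; }         // trivial instance method
    }

    class HotLoop {
        static long sum(Source s, int n) {
            long total = 0;
            for (int i = 0; i < n; i++) {
                total += s.get();                  // typically devirtualized and inlined by the JIT
            }
            return total;
        }
    }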
Admin
His point is probably that you can't then mock out that function and replace it with some other implementation for unit testing...
Admin
To all the oldtimers here: 16K is for wimps. I have to do complicated routines with 1K TODAY.
Atmel AVR, grampas!
Admin
Admin
TRWTF is that some developers think "throw more hardware at it until it runs" is the solution to everything. They had crappy code and tons of unclosed connections, but hey, "just install more RAM, duh!"
Sorry, but this kind of behavior is the exact reason sites like this exist in the first place: because crappy programmers write crappy code and then just throw more hardware at it. At my old job we constantly had to deal with the SQL server having to be rebooted, because my predecessors didn't care about closing connections either. There are only so many connections in the connection pool, though. As soon as I mentioned the words "connection pool" I could see that no one else had ever heard of it.
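For the record, the fix is usually a one-liner per call site; a minimal sketch (Java 7+ try-with-resources, with a hypothetical pooled DataSource "ds" and an "orders" table invented for illustration):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    class OrderDao {
        private final DataSource ds;               // backed by a connection pool
        OrderDao(DataSource ds) { this.ds = ds; }

        int countOrders() throws SQLException {
            try (Connection con = ds.getConnection();
                 PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM orders");
                 ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getInt(1) : 0;
            }   // con, ps and rs are closed here even on exceptions, returning con to the pool
        }
    }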
Admin
Hello Guys, Glad to Join! :)