Admin
There is nothing wrong with wrapping a few of the debug statements if profiling shows that this is slowing the program down. Otherwise, don't bother; that way you can still turn logging on while the program is running, which can be a lifesaver.
In this case we have no inkling whether the edict to do this was based on profiling or just "past experience." What is clear is that there was no need to do them all.
Personally, I rarely debug using log messages; I spend that time writing unit tests and it lets me sleep at night. MBFB!
Admin
I think when they wrote "... do something expensive ..." they meant something like a database lookup, not concatenating a few strings.
Writing an if before every logging statement is redundant: it bloats the code, makes it less readable, and introduces a new chance to introduce errors. If I cared so much about performance that concatenating a few strings made the difference, I would use a preprocessor to completely remove all debugging code from production releases (unless I wanted to be able to switch it on at run time). This saves even more CPU cycles and doesn't hurt readability.
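Java has no preprocessor, but a rough equivalent is a compile-time constant, which javac will strip guarded blocks for. This is only a sketch; the Debug class, the ENABLED flag and OrderProcessor are made-up names.

final class Debug {
    static final boolean ENABLED = false; // flip to true for a debug build
}

class OrderProcessor {
    void process(Object order) {
        if (Debug.ENABLED) {
            // With ENABLED a compile-time false constant, javac omits this
            // block from the class file, so the concatenation never runs.
            System.err.println("processing " + order);
        }
        // ... real work ...
    }
}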
Admin
I agree 100%. The "Java Advantage" is one based on real-world behaviour of development organizations, not the theoretical best case. The only performance advantage Java has in the best-case scenario is that the design of the environment makes heap fragmentation easy to handle. C and C++ at their core don't do this.
And as one would expect, the applications where Java is most often used successfully are server/middleware apps whose long runtime means that start-up time and first runs are proportionally less important. Fortunately for Java this is a broad category of applications, enough to make the language popular in spite of its drawbacks in other areas.
Yep, I'll agree. A semi-persistent dynamic compilation model would be far better.
Don't forget that, for better or for worse, Java provides a very rich set of standard libraries. This makes it easy to develop code. Of course these libraries are generic, and frequently won't run as fast as a hand rolled solution to each problem. But they are also well tested. For most problems, fast development with minimal risk at reasonable performance levels trumps slower development at higher risk but high performance levels. We have a generation of developers who don't need to know how to implement a queue, a stack, a hash table, etc. But we also have a generation of developers who won't screw up these constructs either. It is a mixed blessing.
Admin
Even so, are you ready to go back and re-optimize when that server is replaced with a newer one in a year or two?
Admin
THANK YOU!! IT'S ABOUT TIME SOMEONE NOTICED THIS!! :)
Admin
Please ignore that line--I was thinking of "#ifdef DEBUG" in C. As written, it is still possible to do this.
Admin
Could ANYONE show me a desktop application written in Java that isn't a memory hog and runs at an acceptable speed??
Admin
Squirrel SQL?
Admin
Actually, while it is much better to have an if-test in the code that attempts to log something than not to have one, it is much more practical to put the if-test inside the log function itself. That is my coding practice: factor it out so there aren't so many duplicate lines. Then we can simply log the data, and whether or not it needs to be logged is determined in one central place. Some may argue about the cost of making the call into the function and then testing the condition, but it will most likely be cheaper than performing the log itself anyway, especially if a database call or a file system write is involved. And if not: while performance is important, so are the readability and maintainability of the program. I think it reasonable to sacrifice performance for convenience when there's a compelling argument, and this is one. The productivity gains are monumental, the maintenance burden is lifted somewhat, and encapsulation is observed.
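A minimal sketch of what I mean (the names logDebug, currentLevel, Level.DEBUG and writeToSink are placeholders, not any particular framework):

void logDebug(String message) {
    if (currentLevel < Level.DEBUG) return; // the one, central check
    writeToSink(message); // only pay for the actual I/O when needed
}

Of course, the arguments are still evaluated at the call site, so an expensive toString() or concatenation still happens even when nothing gets written; that is the trade-off being debated below.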
Thanks,
Shawn
Admin
HAHA HAH HAHAHA REALLY!
Whose benchmarks, Info-World?!
So running a program in a VM is faster than on a real machine?
Let me know when you finish disproving more of CS's formal theoretical foundations.
Admin
The problem with this approach in general is that String concatenation is a relatively expensive operation, and you still end up building strings regardless of whether the logging is turned on or off. This can have a much more noticeable effect on performance than extra boolean evaluations.
A middle ground is to define a bunch of log functions like this...
void logDebug(Object o1) {
    if (logLevel < Log.DEBUG) return;
    printLogMessage(o1.toString());
}
void logDebug(Object o1, Object o2) {
    if (logLevel < Log.DEBUG) return;
    printLogMessage(o1.toString() + o2.toString());
}
void logDebug(Object o1, Object o2, Object o3) {
    if (logLevel < Log.DEBUG) return;
    printLogMessage(o1.toString() + o2.toString() + o3.toString());
}
...
I'd do this for some reasonable number of parameters up to five, then I'd implement something like this to handle any other cases...
void logDebug(Object[] o) {
    if (logLevel < Log.DEBUG) return;
    StringBuffer sb = new StringBuffer(o.length * 50);
    for (int i = 0; i < o.length; i++) {
        sb.append(o[i].toString());
    }
    printLogMessage(sb.toString());
}
Admin
Nay, real men code in Fortran and patch the compiled binary with Superzap.
And they don't use any fancy IDE or text editor; TECO is their tool, and the only one worthy of their prowess.
Admin
Did you bother reading any of the discussion so far, or did you just shoot your mouth off because you wanted to look like an asshole? Here's a hint: look up JIT.
Admin
When you were forced to the DOS world did you use PMate?
Admin
<sigh> Unfortunately the answer is that we don't optimise at all (of course, we don't refactor either... or write well to begin with... but that's another story). I was simply curious, given that the example was one requiring multiple chip-based versions.
You make a good point, of course. I'd hate to have to redo any work.
If we're talking about optimisation done by the programmer, though, I'd guess that such optimisations shouldn't go down to the chip level - they'd be more generically applicable to the algorithm used.
Chip-level optimisation should be left to the compiler, IMO, and if we're talking about optimisation done by the compiler - does it matter much if I change chips? I'd recompile on the new chip and it'd be optimised.
Have I missed something obvious?
...and I'm still curious whether there are profilers for C programs. :)
Admin
In short, Java got a great performance boost by copying .Net. Cute. (Actually I'm still all for any performance enhancement to the language, since I have to use it now and then, but it wasn't anything especially revolutionary by the time they got to it. Even in .Net it wasn't, but at least it was new to popular languages.)
You missed the fact that logging isn't off, there are just different log levels (logdebug, loginfo, logwarning, logerror...), probably based on syslog or something similar. So as long as it's at least at Info level, but not Debug, it'll still log all those inane messages. ;)
Almost every audio/video processing package made by anyone worth a damn builds per-CPU optimizations into a single binary and selects the most efficient paths at run time. (Although I do occasionally still see per-CPU binaries.) For best performance these paths have to be hand-made at the assembly level, but just using Intel's compiler on straight C/C++ with the highest optimization options (and several CPU targets) selected will give you separate code paths optimized for several different architectures simultaneously.
Of course there are C profilers. There are VB and Fortran profilers! You don't hear too much about C software, because the world has mostly moved on to flashier things, but I believe Borland had a good one (stuck in their overpriced enterprise IDE), and MS has a very basic one and bought a company that made a very powerful one. There are others I've seen here or there; I've just never had time to look into them. Many that work with C++ can be coerced to work with C.
</sigh>
Admin
HA!
Admin
I've heard that Intel "optimizes" for non-Intel processors in ways that are horridly inefficient. The example I was reading about was a strcpy or memcpy function where the for-Intel code path copied everything at once while the non-Intel path copied one byte at a time. (Before everyone jumps on me for this: it's only a semi-reliable source.)
Admin
Sort of. While early JVMs were interpreters, modern JVMs are really hybrids: they are both interpreters and compilers. When the JVM identifies a section of code as important and time-sensitive, it compiles that section, applying all the optimizations a compiler would normally apply, and then executes the compiled section from then on. Since compilation happens at run time, the best possible optimization for that particular hardware can be applied. The programmers don't even need to think about it.
</sigh>
There are some links through here... http://en.wikipedia.org/wiki/Profiler_%28computer_science%29
</sigh>
Admin
This is why I've held off posting in these forums for so long. It's hard for some people to disagree with a post and post their reply without being an ass.
My point was not to assume he was correct in his assessment of Java, nor to assume he was incorrect. His point was flawed in thinking that, having chosen a slower technology, no time should be spent on performance because you should have chosen the faster technology in the first place. Regardless of which side of the argument you fall on, there are advantages to using certain technologies that may be worth trading performance for. That was the point of my metaphor. If you can't afford the Corvette, buy the Camaro. You will still want the Camaro to go as fast as possible, without someone telling you that if you wanted to go faster, you should have bought the Corvette.
Admin
This is where CPP macros are actually useful. (and they work just as well with Java...)
#define logDebug(MESSAGE) do{if(debugEnabled())reallyLogDebug(MESSAGE);}while(false)
Now all of your debug stuff is wrapped in ifs without having all that clutter.
Another interesting thing you could do in C++ for this specific problem is to write a string class with O(1) concatenation. For that matter, you could probably do the same in Java, but it'd be more expensive and uglier. That doesn't deal with other expensive things you might want to put in the argument list though.
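Something along these lines, a toy sketch rather than a real rope (LazyConcat is a made-up name), would at least defer the copying in Java until the message is actually rendered:

final class LazyConcat {
    private final Object[] pieces;

    LazyConcat(Object... pieces) {
        this.pieces = pieces; // just keep references; no copying yet
    }

    public String toString() {
        // Pay for the join only if someone actually asks for the text.
        StringBuilder sb = new StringBuilder();
        for (Object p : pieces) {
            sb.append(p);
        }
        return sb.toString();
    }
}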
And in terms of profilers: you can get AMD's CodeAnalyst for free from their web site and MSVC 8 has "profile guided optimization," where the compiler will do low-level optimizations for you based on profile data.
Admin
[NOTE: I write highly optimizing compilers for a living :P]
All sane C++ compilers have profile guided optimizations.
They just require feedback runs.
JITs really aren't that interesting for C++ unless you use heavy dynamic loading, because they won't tell you anything you can't get through simple profiling feedback or whole-program optimization (which Java lacks in general, because of dynamic class loading).
Also:
Code paths don't randomly change in long-running server applications. The only thing dynamic profiling (i.e. the JIT profiling for you) really buys you is not having to do profiling runs on your own, which you probably end up doing anyway because the JIT doesn't make it fast enough. It's hard to do good optimizations in the time a JIT has. No JIT I'm aware of will do the heavy data reordering and cache transformations that a good C++ (or Fortran) compiler will do, and none of them really do anything interesting with the profiling feedback beyond recompilation, partial devirtualization, and maybe a small amount of value profiling. And they only do that because they have to in order to stay competitive with static C++ compilers. They aren't competitive at all with C++ compilers given profiling info.
Real companies (e.g. eBay) don't randomly put applications into production; they test the hell out of them in real circumstances, which is a perfect time to gather profiling runs and feed them back to our compiler.
Admin
Google "C profiler". You'll be surprised.
</sigh>
Admin
Yes, basically it is. However, an anonymous class in Java can only access 'final' variables from the surrounding method.
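For illustration (the names here are hypothetical), this is the restriction in question: the anonymous class below may read greeting only because it is declared final.

void sayLater(final String greeting) {
    Runnable r = new Runnable() {
        public void run() {
            // Without 'final' on the parameter, this reference won't compile.
            System.out.println(greeting);
        }
    };
    new Thread(r).start();
}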
Admin
GCC has a reasonable profiling switch, with output that gprof can analyze. It can also use the profiling information in profile-guided optimizations.
</sigh>
Admin
Finally, somebody with a good grip on the facts!
Admin
EVERY-ONE knows that real programmers use VB6!!!!!
Admin
Why do you state that Java has excellent multi-threaded performance? I've done some research in the area of concurrency and parallelism, and I must say that my experience is that Java has very bad multi-threaded performance. C/C++ gives you good performance, Erlang gives you enormous numbers of threads, and if you really want to look at nice languages, take a look at AKL and Oz for some serious concurrency/distribution.
Furthermore, the Java specification is really bad, although they fixed some issues with the new memory model in Java 1.5. The basic problem is still that Java prescribes a shared-memory model for threaded applications, which puts an inherent limit on the scalability of the applications, as well as imposing an inferior model on the programmer.
Admin
Hallelujah!
Admin
Haven't we all been there?
Admin
Duh...
Work on JIT compilation began with Smalltalk in the '80s, and the current HotSpot JIT comes out of the Self group.
The work on Self's JIT started in 1994 and was working reliably in 1996, but the team then got redirected to create a JIT for Java.
This means that the work on Java's HotSpot JIT began in 1996. I'd hardly call having the feature before .Net was even born "copying .Net"...
Admin
You are an incompetent, ignorant, idiotic Microsoft whore.
Sorry, but a statement that wrong deserves nothing but abuse.
.Net copied Java, NOT the other way round.
Admin
I rather doubt that many people would agree that shared-memory multithreading is an "inferior model". Any other model would put severe limitations on what you can do with threads. I mean, it sounds impressive that Erlang can spawn threads like wildfire, but not so much when you realize they can't do many of the things people have come to expect from threads. It may be that you can adjust expectations and programming techniques to live with the limitations and profit greatly from the advantages, but I don't think that amounts to a fundamental superiority.
Admin
Try Puzzle Pirates.
Admin
It typically becomes a performance issue (easily well below 100k calls) when assembling a large string in small increments this way, since for every concatenation all content gets copied to a new string and the old one gets thrown away. This results in O(n^2) execution time AND O(n^2) total (not simultaneous) memory usage, putting massive strain on the garbage collector.
But if you're not building a large String in a loop, it's pretty irrelevant. Unfortunately, as with most performance advice, it's often repeated in a dumbed-down version ("never concatenate Strings with + !!!") as a "rule" by people who don't really understand the issue.
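To make the distinction concrete, here is a toy sketch of the two cases (the method names are made up):

// Quadratic: every += copies the whole string built so far.
String buildSlow(int n) {
    String s = "";
    for (int i = 0; i < n; i++) {
        s += i;
    }
    return s;
}

// Linear (amortized): StringBuilder appends in place.
String buildFast(int n) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < n; i++) {
        sb.append(i);
    }
    return sb.toString();
}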
Admin
Initial start up is faster for C/C++ but runtime allocation of memory is faster in (recent) Java. Thus for long running applications, Java wins.
Justin.
Admin
Or, for a heavier(?)-scale program, IntelliJ IDEA. You'll even see the amount of memory it's taking at all times.
Admin
Never heard of macros?
(defmacro debug-message (expr)
  `(if (debug-on) (debug-send ,expr)))
(debug-message "why not taking both pretty things?")
I'm not a pro at this, so don't yell if it's gone wrong ;)
- Lisp lover
P.S. I'm wondering why everybody uses anything but Lisp. The most abstract language ever made, and nobody uses it! :|
Coding in anything else is bad coding practice. (I'm still doing so, though 8-| so I'm not a good programmer myself...)
Also, you'll notice that this would count as optimizing in Lisp.
I'm guessing somebody will gripe about all those parentheses next, and I'll have fun when they do...
Admin
(((I'm) not ((quite) sure) but ((there) must) be ((a) reason)))
Admin
Arght, I now officially hate the forum software like everyone else does - it ate the rest of my posting, which was this:
It's a massively multiplayer online game with a Java client. It runs at a perfectly acceptable speed and hogs no more memory than everything else does these days.
BTW, much of Java's memory hogging results from its reluctance to let go of memory once it's gotten it from the OS, since it will often need the memory again soon, and getting and returning memory from/to the OS is much slower than Java's internal memory management. You can adjust the level of free memory the JVM will hang on to; the default setting is unfortunately rather high for an app that does not have a server all to itself.
Admin
That's why it sucks so much.
Admin
Probably because Lisp has a frigging steep learning curve and a syntax most people can't get used to reading, because it's completely unnatural and very far from human languages (Lisp and its syntax come from pure maths, after all).
Add to that that very few editors do a good job of handling Lisp syntax (to be fair, only Emacs manages to format Lisp into somewhat readable code), and that it's far too abstract conceptually (even when using Lisp dialects such as Common Lisp or Scheme; nobody uses "raw" Lisp anyway) for most of the population, even the programming-aware subset (to quote John Foderaro, "Lisp is a programmable programming language", after all).
On the other hand, Lisp is indeed the most powerful language ever created, the one with the most features and flexibility (saying that other languages have spent the last 40 years playing catch-up with it is nothing but true), and a really fast one on top of that (if you compile it).
But it's just too abstract, and too... foreign...
You can grok the basics of most languages in a couple of hours; not so with Lisp.
Admin
Christ, this is ugly. No, not the WTF itself, that's stupid. The language is ugly.
WTF does string concatenation incur a copy for, even if the string is never used? Strings are immutable anyway; concatenation should be done lazily, and therefore be O(1).
WTF is there no method that takes the log level and more than one string as parameters? It would check the log level internally and then concatenate only if it needs to.
If nothing else works, this recurring idiom of always the same if-statement and a logging call cries out for a preprocessor. Not that Java has preprocessing built in, but it's only one Ant rule away.
Josh is an idiot, because he cited performance concerns and didn't profile the program.
Now excuse me while I barf in Jim Gosling's general direction.
Admin
anyone got a sewing machine? because alexis is RIPPED!
Admin
The pre-processor argument doesn't really hold water. The nice thing about the logging frameworks developed for java is that they can be enabled and disabled at runtime for both the entire application or specific classes and packages. A preprocessor only helps if the logging is specifically related to debugging as part of the development cycle. Once software is deployed on a production system, debugging via recompiling becomes a very difficult option.
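For instance, with the JDK's own java.util.logging (just one option; log4j and friends offer the same kind of control through configuration files), a level can be raised for a single package without recompiling anything. The package name below is hypothetical.

import java.util.logging.Level;
import java.util.logging.Logger;

public class DebugToggle {
    public static void main(String[] args) {
        // Turn on debug-level output for one package at run time.
        Logger.getLogger("com.example.orders").setLevel(Level.FINE);
    }
}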
Admin
You should have held off a little longer; forever would have been a better timeframe.
Listen, I'll spell it out for you as plain as I can.
Again, I am assuming that the original assessment of Java is correct and it will be slow. (This is not something I agree with.) Java is the Camaro, to use your lame metaphor.
Let's say the Camaro takes 10 minutes to get somewhere and the Corvette takes 10 seconds to get to the same point. Now, by doing some optimization to the Corvette and the Camaro, 5 seconds can be shaved off. That cuts the Corvette's time in half but reduces the Camaro's time by less than 1%.
You see, there are choices as to where to use your employees' time. And I know what you're going to say: over time, the savings from modifying the Camaro will add up to something substantial. This is a calculation you have to do: consider the expected lifetime of the application, the other areas that could be addressed, and the improvement you can expect.
Your metaphor was dumb.
Sincerely,
Richard Nixon
Admin
Wow. That was clever. What a zinger! You really got me there.
Hey, did you happen to fight anyone with your karate skills today?
Sincerely,
Richard Nixon
Admin
I'm enjoying today's flamewar more than the actual WTF.
Admin
Wow, there are so many "premature optimization" retards on this thread it is amazing. You guys have clearly never worked on Java server software. Where I work we implemented the same optimization (without the WTF) and got a 10% improvement.
When someone says performance-critical Java software, it is pretty safe to assume they are talking about server software, and in any server software there will probably be thousands of debug statements, many of which, as has been said before, will be calling expensive toString methods. You would have to be a moron to just assume that it is a premature optimization.
For all the people who said they would never work in a place with that rule: don't worry, you are never going to get a programming job, but will be working as a Linux sysadmin, thinking you are a great h@x0r 'cos you wrote some Perl scripts, for the rest of your days.
Admin
The retard is you, for thinking this justifies using the "optimization" always and everywhere. First, a 10% performance improvement is not much at all; in most applications you can get a LOT more by optimizing the hotspots identified with a profiler. Second, you'd most likely have gotten 9.5% of that improvement by making the change in only a handful of places (again, the hotspots identified with a profiler).