Admin
HAHA, Karate is for kids and sissies.
Admin
Do you really think we didn't profile it? We are continually profiling and performance testing our software. And no, it was a cumulative effect: 200 debugs per request * an average of 4 string concatenations each = big performance problem.
And 10% is a lot in a mature product. In most cases you are not going to be able to profile an app and find an easy 90% optimization. Your experience with toy apps might differ, but when a product is a few years old, 10% is a very good improvement for very little work. Really, is if (logger.isDebug()) logger.debug("blah: " + someObj); so much worse than logger.debug("blah: " + someObj); ?
Obviously it is not ideal, but it is hardly Duff's device.
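For reference, here is the idiom under discussion as a minimal, runnable sketch against log4j's actual API (the real level check is isDebugEnabled(), as in the Tomcat and Jetty snippets quoted later in this thread; the Date object just stands in for anything with a non-trivial toString()):

import org.apache.log4j.Logger;

public class GuardIdiomDemo {
    private static final Logger logger = Logger.getLogger(GuardIdiomDemo.class);

    public static void main(String[] args) {
        Object someObj = new java.util.Date();

        // Unguarded: "blah: " + someObj calls someObj.toString() and
        // builds a new String on every pass, even when DEBUG is disabled.
        logger.debug("blah: " + someObj);

        // Guarded: the argument expression is only evaluated when DEBUG
        // is enabled, so production (typically INFO or above) pays only
        // for a cheap boolean check.
        if (logger.isDebugEnabled()) {
            logger.debug("blah: " + someObj);
        }
    }
}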
Admin
That would lose you the most important advantage of immutability, namely thread safety without synchronization.
Because if it's performance-relevant at all (which it most likely isn't), the concatenation is not the problem;
creating the strings via some toString() method is. Do you also want lazy evaluation for method parameters?
And all this for one specific programming idiom that's not actually a problem most of the time...
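On the immutability point above, a minimal sketch of the trade-off (a hypothetical Point class, assuming nothing beyond the standard library): every field is final, so instances can be shared across threads with no synchronization, and "mutation" means allocating a new object.

public final class Point {
    private final int x;
    private final int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public int getX() { return x; }
    public int getY() { return y; }

    // "Mutation" produces a new object instead of changing this one;
    // no thread can ever observe a Point in a half-updated state.
    public Point translate(int dx, int dy) {
        return new Point(x + dx, y + dy);
    }
}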
Admin
Actually, now that I think about it, this could be very easily avoided if the log libraries supported printf syntax and you passed in objects instead of strings. Oh well.
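A rough sketch of that suggestion (Log here is a hypothetical class, not a real library, though SLF4J's parameterized messages work on the same principle): the caller passes a pattern plus raw object references, and formatting - including every toString() call - happens only after the level check passes.

import java.text.MessageFormat;

public class Log {
    private volatile boolean debugEnabled = false;

    public void setDebugEnabled(boolean enabled) {
        this.debugEnabled = enabled;
    }

    public void debug(String pattern, Object... args) {
        if (!debugEnabled) {
            return; // args were passed by reference; no toString()
                    // or concatenation has happened yet
        }
        System.out.println(MessageFormat.format(pattern, args));
    }
}

// Usage: new Log().debug("blah: {0}, count={1}", someObj, 42);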
Admin
I notice that you conveniently skipped his second point, which is the point that most of us who are against stupid "always do X" rules that muck up the code are trying to make.
Admin
They only did the optimization after a few years, and only when they noticed they could get an edge from it. I could hardly call it "always do X".
Admin
Your optimization is wrong. If you cut your trip time in half with a shorter route, the Corvette takes 5 seconds while the Camaro takes 5 minutes. This is very significant to the Camaro driver, but insignificant to the Corvette driver (unless you make the trip often).
So optimization is more important in slower languages.
Doubling the speed in the real world is common. Taking a flat 5 seconds off, no matter which language, is rare if not impossible.
Admin
It's too bad they don't have compiler pragmas.
Admin
You dumb shit - learn to read. I said that if you can do an optimization, you take 5 seconds off the time for both the Corvette and the Camaro. Wow - you are really stupid, Hank.
I can't believe I have to spell this out for you. In the case of the Corvette, you've gone from 10 seconds to 5 seconds. In the case of the Camaro, you've gone from 600 seconds to 595 seconds.
LEARN TO READ MORON!
Sincerely,
Richard Nixon
Admin
I can read. I rejected your entire argument because it is not realistic. I can think of no optimization worth doing that will cut the runtime of a fast program in half, but when applied to a slow program will cut less than a percent.
If you can go from 10 seconds to 5 with the Corvette, you have made a 50% improvement. You should be able to make the same improvement with the Camaro, which would take you from 600 seconds to 300! The only way to cut travel time by that much is to find a more direct route, an optimization that applies to both.
Real-world optimization should focus on algorithms, because that is where real gains are made.
Now if you had shaved .0001 seconds off the Corvette runtime, I would accept that the equivalent optimization on the Camaro would shave nothing off the runtime. (For instance, replacing the air cleaner with a high-flow version might help the Corvette, but not the Camaro.)
Remember what Knuth says about optimization, though, before you read too much into this.
Admin
I wanted to try and cap off this conversation with a summary of points, so maybe we wouldn't go off arguing again the next time a Java WTF is posted. This is generally applicable to .Net as well.
Admin
Are you unable to argue without name-calling? But have it your way:
If your software is such a toy app that 800 string concats are 10% of what it does in a request, or you are so incompetent that you need 200 debug statements to figure out what happened while handling a request that does hardly any work, then fine, it works out well for you. Going from there to "anybody who doesn't do the same thing is a moron and clearly has no experience" is... rather moronic.
BTW, if only morons think doing it everywhere is a premature optimization, why weren't you doing it right from the beginning? Why did you notice that it gave a whopping 10% boost only after a few years?
Admin
True, but when you add garbage collection (which you can do in C++), you lose the ability to do RAII. (Well, you can do it, but only to a limited extent.)
If you write C++ like it is 1995, then Java is a lot better. C++ programmers have learned a few things since then to make programming better (not to mention compilers had to learn how to do templates and the like, and things like the STL had to be written).
Either way, I prefer to work in Python, where performance doesn't matter, which is 99% of the time. Sure, my programs take 20 times longer to run, but for the most part they still finish as soon as you hit return, or, failing that, spend most of their time waiting on something else that even hand-optimized assembly would be waiting on, and thus run at the same speed.
Admin
Meanwhile, I have thought of a few. Basically, anything that takes a fixed time and happens only once per measurement would qualify: JVM startup (Sun has made some improvements there in Java 1.5), reading in a config file for a command-line utility where you measure one program run, or making a network call with big lag to an external system for a server app where you measure the time per request.
So it's not entirely unrealistic, but still far less common than improvements that are proportional to existing overall speed, I'd say.
Admin
How... amusingly optimistic
Admin
Ok, please just ignore the name calling. I was a little annoyed, sorry about that.
So the performance equivalent of 8000 string concats for a request is a toy app? Anyway, as you said yourself (and I oversimplified), most of the work for these debug calls is going to be in toStrings, so what appears to be 4 concats is really much more work. And the level of logging, while it may seem excessive to you, is actually considerably less than I have seen in many enterprise apps.
Why wasn't it being done from the beginning? I didn't work here :P. Really, it wouldn't have been 10% originally, but as the core logic was optimized it became a larger factor. Having learned from this case, I would expect most server-type apps to benefit from this optimization and wouldn't hesitate to use it for apps with frequent debug logging. For other apps I wouldn't bother (e.g. GUI apps, anything where performance isn't hugely important).
Again, I never said it was suitable for all cases! Please stop attacking a strawman and consider your own posts.
But I stand by my statement that those who considered it a premature optimization and assumed the developer never profiled the software, without any evidence for those beliefs, are... well, not the most insightful in this case.
Did the original post say it was a law in that company? No. Was there any reason to believe the developer had not profiled his app? No. Has this optimization proved useful in other similar situations? Yes. Assuming that it was a premature optimization is completely unjustified and demonstrates a lack of familiarity with server-class software written in Java.
Admin
Thanks for posting the correct answer so early. If this is true, the code reviewer is every bit as bad as the person writing the code. Either that, or the person writing the code is smarter and the response is a joke at the reviewer's expense...
Admin
As an example of this being considered useful here is some code from Tomcat:
if (log.isDebugEnabled()) {
    log.debug(Localizer.getMessage("jsp.error.compiler"), t);
}
Admin
From jetty:
if (log.isDebugEnabled())
    log.debug("parsing: sid=" + source.getSystemId() + ",pid=" + source.getPublicId());
Admin
One of the things that is lost on non-Java developers is that this is a standard idiom. Part of the reason the code reviewer rejected the original code is that it deviated from the idiom for no justifiable reason. It is "easy to read" because it appears like this in virtually every application that uses log4j or the Apache Commons Logging framework. It is faster because the framework has been designed in such a way that this will be faster. It is not a premature optimization because this is the way it is supposed to be done in the first place.
Admin
I have to dream the dream.
Admin
Please read before posting: "Second, you'd most likely have gotten 9,5% of that improvement by doing the change in only a handful of places"
You are talking about time; I was talking about location.
Admin
There's a distinct difference between what Knuth calls "premature optimization" and programming sensibly. Pissing away clocks logging debug info isn't sensible, and waiting for a profiler to tell you obvious things like this is doubly insensible.
Admin
It all depends on your problem domain. If you are always writing OS kernel code, then I might start to agree with what you said. Very many problem domains are not CPU sensitive. What you call "pissing away clocks" I call focusing on the real task at hand. If you look back 20 years, the pissing away clocks argument keeps changing. As hardware and software get more powerful, we are spending less and less time worrying about "pissing away clocks" and more time worrying about getting things developed quickly and correctly. What you think is pissing away clocks I think is pissing away developer time.
I think requiring the isDebug check in all situations is insensible. I think this is exactly what Knuth is talking about.
Admin
I'm going to repeat the first Anon's request here.
Admin
I wouldn't say they are retards, but I would say they do have a wrong viewpoint.
Yeah, *-) nice amount of speed increase, but you forget one thing... you are spending an awful lot of precious time writing that one extra line per debug log message! And Java doesn't even supply the tools to help with it. :@
In other words, you people are just trading readability and time for speed by doing so. Premature optimization is evil indeed, but if you guys want to do that, you don't have any other way around it. I'm happy I don't have that problem anymore. 8-|
Admin
You pansy! Y'all do know that vim stands for, "vi IMPROVED," don'tcha? Why, I'll bet thasseveneasier to use than standurd vi!
...real men edit their text files with cat and echo. ;)
Admin
Hum, did you read HIS post? He says it was a cumulative effect, so it is not detectable with a profiler :)
You're right about not reading; I was confused by a post from Brazzy that said "why weren't you doing it right from the beginning? Why did you notice that it gave a whopping 10% boost only after a few years?"
Admin
I had to read that about 10 times before I realized the poster meant performance and not perforce (source control).
Admin
Actually, you must have misunderstood me because I was saying something similar:
public void LogDebug(string message)
{
    if (DEBUG != true)
        return;

    // do logging
}
Thanks,
Shawn
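One caveat on wrappers like Shawn's, echoing the lazy-evaluation point made earlier in the thread: the method skips the logging work but not the argument construction, because the concatenation happens at the call site before the method is ever entered. A minimal Java restatement of the same shape (hypothetical names, to match the rest of the thread):

public class WrapperCaveat {
    private static final boolean DEBUG = false;

    // Same shape as the wrapper above, in Java.
    static void logDebug(String message) {
        if (!DEBUG)
            return;
        System.out.println(message);
    }

    public static void main(String[] args) {
        Object someObj = new java.util.Date();
        // Even with DEBUG off, "blah: " + someObj still calls
        // someObj.toString() and builds the full String before
        // logDebug() ever sees it. Only a guard at the call site
        // (or passing raw objects) avoids that cost.
        logDebug("blah: " + someObj);
    }
}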
Admin
Sorry, but I'm no one's shill. Some references I checked mixed up the versions when Java gained generational garbage collection (it was 1.3.1, not 1.4.1), so it was essentially a dead heat, although Java's was far less performant than .Net's until later 1.4 and 1.5 updates.
That doesn't make you any less of a twit for coming in blazing when my post concerned only one feature, which was not in fact copied from Java.
Admin
The first time I read this I thought he said bust in your face ...
Admin
Ok, maybe I was overly argumentative/dismissive. Sorry for that.
However, it is not the case that you can't do things in a threaded environment if you don't have shared memory; it just requires a different style of programming. For the Erlang example, the very reliable and scalable Ericsson telecom switches are good evidence that it is a viable platform for large, performance-sensitive, long-running applications. Also, there is a nice distributed database for Erlang called Mnesia. For some interesting discussion of reliable computing, see Joe Armstrong's PhD thesis (Armstrong is one of the designers of Erlang).
Another nice example is DragonflyBSD. Although it is not finished yet, it does work, and the infrastructure is in place. The kernel in this OS is based on very simple lightweight threads that communicate through messages.
My point is that the advantages of rethinking in terms of message passing, over using shared memory, are so big in architectural complexity reduction that there are very few places where you would want shared memory. Viewing shared memory as the "dangerous feature" of threading, akin to gotos in control structures, is quite a good analogy, IMHO.
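For the curious, a minimal Java sketch of that share-nothing style (illustrative names only, using java.util.concurrent from Java 5): the two threads never touch common mutable state, they only exchange messages through a queue.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MessagePassingDemo {
    public static void main(String[] args) throws InterruptedException {
        final BlockingQueue<String> mailbox = new LinkedBlockingQueue<String>();

        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {
                    // Blocks until a message arrives; no locks or
                    // synchronized blocks appear in user code.
                    String msg = mailbox.take();
                    System.out.println("received: " + msg);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        consumer.start();

        mailbox.put("hello from the producer");
        consumer.join();
    }
}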
Admin
Because Java performs well, and is reliable.
Duh.
Admin
I'd much prefer 199 excess debug statements in my logfile to no debug statements logged at all, because most things don't fail. I've spent a lot of time trying to figure out why some program failed at a customer site.
Wading through thousands of lines in a log file is boring, but at least when the customer has problems (I am not a perfect coder, and my co-workers are not either) I can figure out what was going on.
The only problem I have with 200 debugs for a simple transaction is if it happens often enough that these logs obscure (perhaps because of log file rotation) some other problem elsewhere, or they make the program slower than acceptable.
Logging is good.
I have had to tell customers too many times that I have no idea why their job failed. So I keep adding more and more logging until I can figure out what is wrong.
I have seen log errors that work out to something like "Error, 2+2 == 5". Hardware is not always initialized when someone says it is. So I reject all arguments that can't-happen cases should not be checked for (though I would make the checks).
Admin
You obviously haven't been involved in many large server-side applications, where you could easily have a few thousand clients calling the code every minute!
Admin
Looks like the dust is finally starting to settle on this... wow, I've been reading this site for quite a while and I've never seen such a flamewar (although the debate on the "interview WTF's" a few weeks ago was close). I'm the Josh in the post, and I stand by my review suggestion, even in the face of the debate that's raged here for the last couple of days. I agree with Knuth's statement that "premature optimization is the root of all evil", but there's nothing premature about this optimization - it's actually kind of a non-issue in Java circles. Or, were you really implying that we should re-check this standard optimization for every program we write and then go back and re-apply it after the profiler (inevitably) tells us that, yes, this is a good optimization? Yikes - talk about wasting programmer time.
Although string concatenation can be a time-waster (from "Effective Java", item 33: "Using the string concatenation operator repeatedly to concatenate n strings requires time quadratic in n"), my biggest concern at the time was memory usage; this logging pattern creates an awful lot of objects that need to be GC'ed (this was on an older JDK, though - this WTF occurred about 5 years ago).
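The quadratic behavior is easy to demonstrate; in this rough sketch (assuming nothing beyond the standard library), each += copies everything accumulated so far, giving roughly 1 + 2 + ... + n character copies, while a StringBuilder appends into a growing buffer in amortized linear time:

public class ConcatCost {
    public static void main(String[] args) {
        int n = 20000;

        // Quadratic: each += allocates a new String and copies the
        // entire accumulated contents into it.
        String s = "";
        for (int i = 0; i < n; i++) {
            s += "x";
        }

        // Linear (amortized): the builder grows its internal buffer
        // and copies each character only a constant number of times.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append("x");
        }

        System.out.println(s.length() + " " + sb.length());
    }
}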
'Twas a fun flamewar, though. Wait until tomorrow when I submit the WTF about the guy who put his curly braces on the same line as the if statement instead of on the next line.
Admin
Still very wrong. The first release of .NET 1.0 was in early 2002, as was Java's 1.4, whereas the first occurrence of generational garbage collection in Java was in 1.2.2 (released mid-1999) - there was an -Xgenconfig command line option to configure it. At that time, .NET hadn't even been announced yet.
Admin
IFYPFY. No need to thank me.
Admin
Who knows, because we don't see all the code here; but if this code is using a current logging package, you wouldn't necessarily see that at all. So in a production environment you'd set your logging to errors-only or something, and you'd see none of these messages in your log.
Admin
Begun this flame war has.
Admin
That's why I do this:
/*DBG*/if(lg.don)lg.dbg("A log message that doubles as a comment");
With most syntactic coloring, the green /*DBG*/ helps easily distinguish log/comments (and sufficient debugs are often all you need to comment code) and reduces the clutter.
Executing a test to see if debug is on is about as close to the atomic, smallest computational unit as you can get, which is why I say O(1), not O(c), when comparing the dbg-on check to string concatenations, which require at least one pass through the strings and objects, plus object-instantiation overhead. I don't know why I said O(2n); that's probably wrong - maybe I said it to account for additional conversions and instantiation. IIRC, big-O is a relative measure, not an absolute one, although it is obviously used to guess at absolute execution time once the hardware parameters are known. The relative nature is necessary considering how fast computers have improved over the history of compsci.
I think O(1) refers to a single-number operation, such as multiplying two numbers, assigning a pointer, or checking a numeric value/flag.
Admin
I make my beans in Java. But...
The dynamic recompiling bytecode interpreters seem to eventually produce native translations of bytecode that will run faster than the output of many C++ compilers. But that still doesn't address Java's woeful track record with memory hogging, or the fact that it takes a few run-throughs of the code to reach the optimization sweet spot. In server code that isn't (theoretically) reloaded all that often, this optimization will occur. In desktop apps, where someone loads up an app, does a quick operation, and closes it two minutes later, that isn't true, and Java sucks at that.
And considering how long it still takes to run an applet in a browser (I just upgraded to JDK 1.5, and that seems to slow applet loading in Firefox to a ten-minute startup crawl), which is just ludicrous when you look at what Macromedia can do with Flash, Java will never kick its bad-performance rep. I mean, seriously: ten minutes to do a basic file-selection applet. Ridiculous.
Finally, keep in mind that different programming languages don't really have fundamental advantages in speed; it comes down to how good the compiler is at generating optimized machine code. Some languages are easier to write fast assembly for (C/C++) than others (LISP), but the fact that "interpreted" languages can be brought on par with compiled code via optimizing runtime engines is a significant, positive development in programming technology.
Admin
Its runtime interpreter caches the machine code it compiles from Java's bytecode, and supposedly profiles its execution to determine how to run it more quickly.
I don't know this for a fact, but I would guess that it identifies the most frequently executed loops and does memory-expensive but performance-boosting machine-code generation for that stuff, while the rest of the program is more conventionally cached.
Since the runtime interpreter can use real-world, practical results to optimize code (and can recompile parts on the fly), it should theoretically be better than a static compiler, which has to make its assumptions at compile time without any sort of real usage analysis.
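A rough way to see that warm-up effect for yourself (a naive microbenchmark, so treat the numbers as illustrative only): time the same method repeatedly in one JVM run and watch later iterations speed up as the hot method gets compiled.

public class WarmupDemo {
    // Some non-trivial work for the JIT to find and optimize.
    static long sumOfSquares(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += (long) i * i;
        }
        return total;
    }

    public static void main(String[] args) {
        for (int run = 0; run < 10; run++) {
            long start = System.nanoTime();
            long result = sumOfSquares(10000000);
            long elapsed = System.nanoTime() - start;
            // Early runs are typically interpreted; once the method is
            // recognized as hot, it is compiled and later runs get faster.
            System.out.println("run " + run + ": " + (elapsed / 1000)
                    + " us (result " + result + ")");
        }
    }
}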
Admin
I have been arguing the "Java performs well" side of this argument. I'm a Java fan, not a Java hater.
Many static compilers can use profiling data to improve their optimizations; the cycle is compile-profile-recompile. So C++ developers can, with not-insignificant effort, do this as well. I suspect most don't, unless they are developing for an embedded app where every cycle counts.
The plus for Java here is not that Java can do this, but that this happens automatically without any additional developer effort.
Admin
Please provide an example where machine code can be used to write a faster program than assembly, assuming that the assembler has all available instructions.
Admin
Sorry for being late to the flamewar and all; I'll try to be on time next time. Until then, read this, and since the inevitable objections are bound to come up, be certain to read this as well.
I'm not saying that these benchmarks are the final word on the matter, but I challenge you to explain what kind of benchmarks you'd like to see that aren't included there, or to demonstrate that Lewis and Neumann are "absolutely clueless about C++".