• (cs) in reply to Richard Nixon
    Richard Nixon:
    Mung Kee:
    Alexis de Torquemada:
    Mung Kee:
    Hmmm, Alexis, I never would have taken you for a conspiracy theorist.  See my earlier post....I have yet to see a benchmark that wasn't slanted one way or the other.  What's worse is the morons here, most of whom have likely never read one, passing judgment regardless of whether they know anything about the "other language".


    I've read and performed some benchmarks comparing various C++ and Java implementations, and in most of them, C++ was slightly to dramatically faster (though D won some, ahead of both) even when taking JVM startup time into account. The others were typically biased against C++ by means of utter cluelessness (e.g. strdup'ing a string only to discard the result), whether malicious or not. There are numerous theoretical reasons why C++ cannot be slower than Java in certain respects, not the least of which being that everything the JVM does can be done in C++ as well. Java implementations have become surprisingly fast given that at one time they executed many programs dozens of times slower than equivalent C++ programs, but Java still is almost never faster than a good C++ program run through a state-of-the-art compiler, as some people like to claim (on this board and elsewhere). It's simply a myth. There are many good reasons to use Java for long-running applications, and even some short-running applications if you're smart enough to use GCJ for that purpose instead of that JVM bloatware. But, sorry, "because it's faster than C++" just isn't among them. BTW, I've used both languages extensively, in numerous implementations, so it must be someone else you are talking about in the last sentence.


    For the record, I wasn't grouping you with the morons.  It was a general statement.


    You're goddamned right you weren't. Otherwise he will bust your face in! He knows karate, after all.

    Sincerely,

    Richard Nixon


    HAHA, Karate is for kids and sissies.
  • Asd (unregistered) in reply to brazzy
    brazzy:
    Anonymous:
    Wow there are so many "premature optimization" retards on this thread it is amazing. You guys have clearly never worked on java server software. Where I work we implemented the same optimization (without the wtf) and got a 10% improvement.


    The retard is you for thinking this justifies using the "optimization" always and everywhere. First, a 10% performance improvement is not much at all - in most applications you can get a LOT more by optimizing the hotspots identified with a profiler. Second, you'd most likely have gotten 9.5% of that improvement by doing the change in only a handful of places (again, the hotspots identified with a profiler).

    Do you really think we didn't profile it? We are continually profiling and performance testing our software. And no, it was a cumulative effect. 200 debugs per request * average 4 string concatenations = big performance problem.

    And 10% is a lot in a mature product. In most cases you are not going to be able to profile an app and find an easy 90% optimization. Your experience with toy apps might differ, but when a product is a few years old, 10% is a very good improvement for very little work. Really, is if(logger.isDebug()) logger.debug("blah: " + someObj); so much worse than logger.debug("blah: " + someObj);?

    Obviously it is not ideal, but it is hardly Duff's device.
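    The point is easy to see in miniature. Below is a self-contained sketch of the guard idiom under discussion; the Logger and SomeObj classes are toy stand-ins (not log4j's real API) that only count how often the expensive message actually gets built:

```java
// Minimal sketch of the isDebugEnabled() guard idiom. The Logger class
// below is a stand-in, not the real log4j API; it exists only so we can
// observe when the message argument is evaluated.
public class GuardIdiom {
    static int toStringCalls = 0;

    // Hypothetical object with a costly toString(), as in the thread's example.
    static class SomeObj {
        @Override public String toString() {
            toStringCalls++;            // track the cost we are trying to avoid
            return "expensive";
        }
    }

    static class Logger {
        final boolean debugEnabled;
        Logger(boolean debugEnabled) { this.debugEnabled = debugEnabled; }
        boolean isDebugEnabled() { return debugEnabled; }
        void debug(String msg) { /* write msg to the log */ }
    }

    public static void main(String[] args) {
        Logger logger = new Logger(false);   // debug off: the common production case
        SomeObj someObj = new SomeObj();

        // Unguarded: the concatenation (and toString) runs even though
        // the resulting message is thrown away inside debug().
        logger.debug("blah: " + someObj);

        // Guarded: the argument expression is never evaluated at all.
        if (logger.isDebugEnabled()) {
            logger.debug("blah: " + someObj);
        }

        System.out.println(toStringCalls);   // only the unguarded call paid the cost
    }
}
```

    With debug off, only the unguarded call pays for the concatenation and the toString(); the guarded call evaluates nothing. That deferred cost, multiplied by 200 statements per request, is the whole argument.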

  • (cs) in reply to anon
    Anonymous:
    1. WTF does string concatenation incur a copy, even if the string is never used? Strings are immutable anyway; concatenation should be done lazily, and therefore be O(1).


    That would lose you the most important advantage of immutability, namely thread safety without synchronization.

    Anonymous:

    2. WTF is there no method that takes the log level and _more_than_one_ string as parameter? It would check the loglevel internally and _then_ concatenate if it needs to.


    Because if it's performance-relevant at all (which it most likely isn't), the concatenation is not the problem;
    creating the strings via some toString() method is. You also want lazy evaluation for method parameters?

    And all this for one specific programming idiom that's not actually a problem most of the time...
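    For what it's worth, lazy evaluation of a log argument is expressible in Java itself once you accept an extra level of indirection. A minimal sketch using java.util.function.Supplier; the Logger here is hypothetical, not any real logging framework's API:

```java
import java.util.function.Supplier;

// Sketch: deferring message construction with a Supplier, so the
// expensive work (toString, concatenation) only happens when the
// level is actually enabled. The Logger below is a toy stand-in.
public class LazyLogging {
    static int builds = 0;

    static class Logger {
        final boolean debugEnabled;
        Logger(boolean debugEnabled) { this.debugEnabled = debugEnabled; }
        void debug(Supplier<String> msg) {
            if (debugEnabled) {
                String s = msg.get();   // message built only on demand
                // write s to the log
            }
        }
    }

    static String expensiveMessage() {
        builds++;                       // count how often we pay the cost
        return "state: " + System.nanoTime();
    }

    public static void main(String[] args) {
        Logger off = new Logger(false);
        Logger on  = new Logger(true);

        off.debug(LazyLogging::expensiveMessage);  // supplier never invoked
        on.debug(LazyLogging::expensiveMessage);   // supplier invoked once

        System.out.println(builds);
    }
}
```

    The call site stays a single line, the level check lives inside the logger, and the message is only built when it will actually be written.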
  • Asd (unregistered) in reply to brazzy

    Actually now I think about it this could be very easily avoided if the log libraries supported printf syntax and you passed in objects instead of strings. Oh well.
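    That idea can be sketched in a few lines: pass a format string plus the raw objects, and defer all formatting (and toString() calls) until after the level check. The Logger below is a toy, not any real library's API, though this is essentially the pattern SLF4J later popularized with its {} placeholders:

```java
// Sketch of printf-style logging: the caller hands over a format string
// and objects; formatting only happens if the level is enabled. This is
// a toy implementation, not a real logging framework's API.
public class FormatLogging {
    static int formats = 0;

    static class Logger {
        final boolean debugEnabled;
        Logger(boolean debugEnabled) { this.debugEnabled = debugEnabled; }
        void debug(String fmt, Object... args) {
            if (!debugEnabled) return;          // cheap early exit: no formatting
            formats++;
            String msg = String.format(fmt, args);
            // write msg to the log
        }
    }

    public static void main(String[] args) {
        Logger logger = new Logger(false);
        // One line at the call site, but the formatting and any toString
        // calls never happen while debug is off.
        logger.debug("parsing: sid=%s, pid=%s", "sys-id", "pub-id");
        System.out.println(formats);
    }
}
```

    One caveat: the varargs array allocation (and any autoboxing) still happens at the call site; only the formatting and toString() work is deferred, which is where the bulk of the cost usually is.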

  • (cs) in reply to Asd

    Anonymous:
    brazzy:
    Anonymous:
    Wow there are so many "premature optimization" retards on this thread it is amazing. You guys have clearly never worked on java server software. Where I work we implemented the same optimization (without the wtf) and got a 10% improvement.


    The retard is you for thinking this justifies using the "optimization" always and everywhere. First, a 10% performance improvement is not much at all - in most applications you can get a LOT more by optimizing the hotspots identified with a profiler. Second, you'd most likely have gotten 9.5% of that improvement by doing the change in only a handful of places (again, the hotspots identified with a profiler).
    Do you really think we didn't profile it? We are continually profiling and performance testing our software. And no, it was a cumulative effect. 200 debugs per request * average 4 string concatenations = big performance problem. And 10% is a lot in a mature product. In most cases you are not going to be able to profile an app and find an easy 90% optimization. Your experience with toy apps might differ, but when a product is a few years old, 10% is a very good improvement for very little work. Really, is if(logger.isDebug()) logger.debug("blah: " + someObj); so much worse than logger.debug("blah: " + someObj);? Obviously it is not ideal, but it is hardly Duff's device.

    I notice that you conveniently skipped his second point which is the point that most of us that are against stupid "always do X" rules that muck up the code are trying to make.

  • Asd (unregistered) in reply to JohnO
    JohnO:
    I notice that you conveniently skipped his second point which is the point that most of us that are against stupid "always do X" rules that muck up the code are trying to make.
    I am not arguing that "always do X is stupid" is wrong. I agree: if you really need to give your developers fixed rules, you are screwed anyway. My point is there was no reason to believe this was a premature optimization; it is a fairly common case in Java programs where this is useful.
  • (cs) in reply to JohnO
    JohnO:

    Anonymous:
    brazzy:
    Anonymous:
    Wow there are so many "premature optimization" retards on this thread it is amazing. You guys have clearly never worked on java server software. Where I work we implemented the same optimization (without the wtf) and got a 10% improvement.


    The retard is you for thinking this justifies using the "optimization" always and everywhere. First, a 10% performance improvement is not much at all - in most applications you can get a LOT more by optimizing the hotspots identified with a profiler. Second, you'd most likely have gotten 9.5% of that improvement by doing the change in only a handful of places (again, the hotspots identified with a profiler).
    Do you really think we didn't profile it? We are continually profiling and performance testing our software. And no, it was a cumulative effect. 200 debugs per request * average 4 string concatenations = big performance problem. And 10% is a lot in a mature product. In most cases you are not going to be able to profile an app and find an easy 90% optimization. Your experience with toy apps might differ, but when a product is a few years old, 10% is a very good improvement for very little work. Really, is if(logger.isDebug()) logger.debug("blah: " + someObj); so much worse than logger.debug("blah: " + someObj);? Obviously it is not ideal, but it is hardly Duff's device.

    I notice that you conveniently skipped his second point which is the point that most of us that are against stupid "always do X" rules that muck up the code are trying to make.

    They only did the optimization after a few years, and only when they noticed they could get an edge from it. I could hardly call it "always do X".

  • (cs) in reply to Richard Nixon
    Richard Nixon:


    You should have held off a little longer; forever would have been a better timeframe.

    Listen, I'll spell it out for you as plain as I can.
    Again, I am assuming that the original assessment of Java is correct and it will be slow. (This is not something I agree with.) Java is the Camaro, to use your lame metaphor.

    Let's say the Camaro takes 10 minutes to get somewhere and the Corvette takes 10 seconds to get to the same point. Now, by doing some optimization to the Corvette and the Camaro, 5 seconds can be shaved off. That cuts the Corvette's time in half but reduces the Camaro's time by less than 1%.

    You see, there are choices as to where to use your employees' time. And I know what you're going to say: over time, the savings will add up by modifying the Camaro to be something substantial. This is a calculation that you have to do; consider the expected lifetime of the application, the other areas that could be addressed, and the improvement you can expect.



    Your optimization is wrong.  If you cut your trip time in half with a shorter route, the Corvette takes 5 seconds, while the Camaro takes 5 minutes.  This is very significant to the Camaro driver, but insignificant to the Corvette (unless you make the trip often).

    So optimization is more important in slower languages.

    Doubling the speed in the real world is common.  Taking 5 seconds off, no matter which language, is rare if not impossible.
  • debugger (unregistered) in reply to cowardly dragon

    It's too bad they don't have compiler pragmas.

  • Asd (unregistered) in reply to debugger
    Anonymous:
    It's too bad they don't have compiler pragmas.
    Logging levels must be changeable at runtime. You can't restart your live server to turn on debug logging.
  • (cs) in reply to hank miller
    hank miller:
    Richard Nixon:


    You should have held off a little longer; forever would have been a better timeframe.

    Listen, I'll spell it out for you as plain as I can.
    Again, I am assuming that the original assessment of Java is correct and it will be slow. (This is not something I agree with.) Java is the Camaro, to use your lame metaphor.

    Let's say the Camaro takes 10 minutes to get somewhere and the Corvette takes 10 seconds to get to the same point. Now, by doing some optimization to the Corvette and the Camaro, 5 seconds can be shaved off. That cuts the Corvette's time in half but reduces the Camaro's time by less than 1%.

    You see, there are choices as to where to use your employees' time. And I know what you're going to say: over time, the savings will add up by modifying the Camaro to be something substantial. This is a calculation that you have to do; consider the expected lifetime of the application, the other areas that could be addressed, and the improvement you can expect.



    Your optimization is wrong.  If you cut your trip time in half with a shorter route, the Corvette takes 5 seconds, while the Camaro takes 5 minutes.  This is very significant to the Camaro driver, but insignificant to the Corvette (unless you make the trip often).

    So optimization is more important in slower languages.

    Doubling the speed in the real world is common.  Taking 5 seconds off, no matter which language, is rare if not impossible.


    You dumb shit - learn to read. I said that if you can do an optimization, you take 5 seconds off the time for both the Corvette and Camaro. Wow - you are really stupid Hank.

    I can't believe I have to spell this out for you. In the case of the Corvette, you've gone from 10 seconds to 5 seconds. In the case of the Camaro, you've gone from 600 seconds to 595 seconds.

    LEARN TO READ MORON!

    Sincerely,

    Richard Nixon
  • (cs) in reply to Richard Nixon
    Richard Nixon:
    hank miller:
    Doubling the speed in the real world is common.  Taking 5 seconds off, no matter which language, is rare if not impossible.


    You dumb shit - learn to read. I said that if you can do an optimization, you take 5 seconds off the time for both the Corvette and Camaro. Wow - you are really stupid Hank.

    I can't believe I have to spell this out for you. In the case of the Corvette, you've gone from 10 seconds to 5 seconds. In the case of the Camaro, you've gone from 600 seconds to 595 seconds.

    LEARN TO READ MORON!


    I can read.  I rejected your entire argument because it is not realistic.  I can think of no optimization worth doing that will cut the runtime of a fast program in half, but when applied to a slow program will cut it by less than a percent.

    If you can go from 10 seconds to 5 with the Corvette, you have made a 50% improvement.  You should be able to make the same improvement with the Camaro, which would take you from 600 seconds to 300!  The only way to cut travel time by that much is to find a more direct route, an optimization that applies to both.

    Real world optimization should focus on algorithms because that is where real gains are made.   

    Now if you had shaved .0001 seconds off the Corvette's runtime, I would accept that the equivalent optimization on the Camaro would shave nothing off the runtime.  (For instance, replacing the air cleaner with a high-flow version might help the Corvette, but not the Camaro.)

    Remember what Knuth says about optimization, though, before you read too much into this.
  • (cs)

    I wanted to try to cap off this conversation with a summary of points, so maybe we won't go off arguing again the next time a Java WTF is posted.  This is generally applicable to .NET as well.

    1. A wizard will always be able to make something run faster by dropping to a lower-level language - Java->C++->C->Assembler->machine code.  These optimizations come at a cost of time for an extremely valuable developer, as well as additional maintenance costs over the life of the application.  Because of declining marginal returns, it is usually not worthwhile to go down this road except for games, embedded apps, high-end video processing, etc.
    2. A statically compiled C or C++ program can generally be made to run a little faster by carefully choosing compiler options to build the best possible code for a particular set of hardware.  Again, these optimizations come at a declining marginal return, and so most development shops will not optimize beyond choosing a broad class of CPU.
    3. Older JVMs, especially the first-generation interpreters, were dog slow.  Modern JVMs are hybrids that interpret and perform dynamic profiling and compilation.  Modern JVMs also have garbage collection schemes that are leaps and bounds better than early JVMs.
    4. Java GUI programs and Applets have all sorts of issues.  We won't get into them here.
    5. Java programs tend to suffer from slow start-up times and slow initial runs.  After the initial profile data is built and the dynamic compiler kicks in, Java programs run much faster.  Java is generally more appropriate for long running server applications.
    6. A C++ program that has been carefully tuned should always outperform a similar Java program.
    7. A Java program will likely perform nearly as well as, and occasionally better than, a similar C++ program if that C++ program has not been tuned very carefully.  The JVM provides a good amount of tuning automatically.
    8. Java also provides some benefits like better type checking and memory management which help improve reliability.
    9. For the right class of programs (long running server processes) in most development shops performance usually is close enough that it should not drive the choice between Java and C++.  Java generally offers better developer productivity and reliability, as well as portability.  C++ can frequently offer other advantages, particularly when integrating established code libraries.  Probably the final arbiter, however, is the pool of available talent.

  • (cs) in reply to Asd
    Anonymous:
    Do you really think we didn't profile it? We are continually profiling and performance testing our software. And no, it was a cumulative effect. 200 debugs per request * average 4 string concatenations = big performance problem. And 10% is a lot in a mature product. In most cases you are not going to be able to profile an app and find an easy 90% optimization. Your experience with toy apps might differ, but when a product is a few years old, 10% is a very good improvement for very little work. Really, is if(logger.isDebug()) logger.debug("blah: " + someObj); so much worse than logger.debug("blah: " + someObj);? Obviously it is not ideal, but it is hardly Duff's device.


    Are you unable to argue without name-calling? But have it your way:

    If your software is such a toy app that 800 string concats are 10% of what it does in a request, or you are so incompetent that you need 200 debug statements to figure out what happened while handling a request that does hardly any work, then fine, it works out well for you. Going from there to "anybody who doesn't do the same thing is a moron and has clearly no experience" is... rather moronic.

    BTW, if only morons think doing it everywhere is a premature optimization, why weren't you doing it right from the beginning? Why did you notice that it gave a whopping 10% boost only after a few years?
  • (cs) in reply to masklinn
    masklinn:

    After it starts up. Garbage collection is actually quite efficient, and if you add that to a JIT compiler (which the latest Sun JVMs are) on a statically typed language (which allows easier optimisations than a dynamically typed one) and code that compiles well to native, you have performance that can reach or exceed compiled C++ levels (usually not C level though, and there are languages that are faster than C - SML plus the MLton compiler often is, for example).


    True, but when you add garbage collection (which you can do in C++), you lose the ability to do RAII.  (Well, you can do it, but only to a limited extent.)

    If you write C++ like it is 1995, then Java is a lot better.  C++ programmers have learned a few things since then to make programming better (not to mention compilers had to learn how to do templates and the like, and things like the STL had to be written).

    Either way, I prefer to work in Python, where performance doesn't matter 99% of the time.  Sure, my programs take 20 times longer to run, but for the most part they still finish as soon as you hit return, or failing that spend most of their time waiting on something else that even hand-optimized assembly would be waiting on, and thus run at the same speed.

  • (cs) in reply to hank miller
    hank miller:

    I can read.  I rejected your entire argument because it is not realistic.   I can think of no optimization worth doing that will take cut the runtime of a fast program in half, but when applied to a slow program will cut less than a percentage. 


    Meanwhile, I have thought of a few. Basically, anything that takes a fixed time and happens only once per measurement would qualify. Such as JVM setup (Sun has done some improvement there in Java 1.5), or reading in a config file for a command-line utility where you measure one program run, or making a network call with big lag to an external system for a server app where you measure the time per request.

    So it's not entirely unrealistic, but still far less common than improvements that are proportional to existing overall speed, I'd say.
  • (cs) in reply to RevMike
    RevMike:
    I wanted to try and cap off this conversation with a summary of points, so maybe we wouldn't go off arguing again the next time a Java WTF is posted.


    How... amusingly optimistic
  • Asd (unregistered) in reply to brazzy
    brazzy:

    Are you unable to argue without name-calling? But have it your way:

    If your software is such a toy app that 800 string concats are 10% of what it does in a request, or you are so incompetent that you need 200 debug statements to figure out what happened while handling a request that does hardly any work, then fine, it works out well for you. Going from there to "anybody who doesn't do the same thing is a moron and has clearly no experience" is... rather moronic.

    BTW, if only morons think doing it everywhere is a premature optimization, why weren't you doing it right from the beginning? Why did you notice that it gave a whopping 10% boost only after a few years?


    Ok, please just ignore the name calling. I was a little annoyed, sorry about that.

    So the performance equivalent of 8000 string concats for a request is a toy app? Anyway, as you said yourself (and I oversimplified), most of the work for these debug calls is going to be in toStrings, so what appears to be 4 concats is really much more work. And the level of logging, while it may seem excessive to you, is actually considerably less than I have seen in many enterprise apps.
    Why wasn't it being done from the beginning? I didn't work here :P. Really, it wouldn't have been 10% originally, but as the core logic was optimized it became a larger factor. Having learned from this case, I would expect most server-type apps to benefit from this optimization and wouldn't hesitate to use it for apps with frequent debug logging. For other apps I wouldn't bother (e.g. GUI apps, anything where performance wasn't hugely important).

    Again I never said it was suitable for all cases! Please stop attacking a strawman and consider your own posts.
    But I stand by my statement that those who considered it a premature optimization and assumed the developer never profiled the software, without any evidence for those beliefs, are... well, not the most insightful in this case.

    Did the original post say it was a law in that company? No. Was there any reason to believe the developer had not profiled his app? No. Has this optimization proved useful in other similar situations? Yes. Assuming that it was a premature optimization is completely unjustified and demonstrated a lack of familiarity with server class software written in Java.


  • Me (unregistered) in reply to csecord

    Thanks for posting the correct answer so early.  If this is true, the code reviewer is every bit as bad as the person writing the code.  Either that, or the person writing the code is smarter and the response is a joke at the reviewer's expense...

  • Asd (unregistered) in reply to Me

    As an example of this being considered useful here is some code from Tomcat:

                if (log.isDebugEnabled()) {
                    log.debug(Localizer.getMessage("jsp.error.compiler"), t);
                }

  • Asd (unregistered) in reply to Asd

    From jetty:

    if (log.isDebugEnabled())
        log.debug("parsing: sid=" + source.getSystemId() + ",pid=" + source.getPublicId());

  • (cs) in reply to Asd
    Anonymous:
    As an example of this being considered useful here is some code from Tomcat:

                if (log.isDebugEnabled()) {
                    log.debug(Localizer.getMessage("jsp.error.compiler"), t);
                }


    One of the things that is lost on non-Java developers is that this is a standard idiom.  Part of the reason the code reviewer rejected the original code is that it deviated from the idiom for no justifiable reason.  It is "easy to read" because it appears like this in virtually every application that uses log4j or the Apache Commons Logging framework.  It is faster because the framework has been designed in such a way that this will be faster.  It is not a premature optimization because this is the way it is supposed to be done in the first place.
  • (cs) in reply to brazzy
    brazzy:
    RevMike:
    I wanted to try and cap off this conversation with a summary of points, so maybe we wouldn't go off arguing again the next time a Java WTF is posted.


    How... amusingly optimistic


    I have to dream the dream.
  • (cs) in reply to Enric Naval
    Enric Naval:
    JohnO:

    Anonymous:
    brazzy:
    Anonymous:
    Wow there are so many "premature optimization" retards on this thread it is amazing. You guys have clearly never worked on java server software. Where I work we implemented the same optimization (without the wtf) and got a 10% improvement.


    The retard is you for thinking this justifies using the "optimization" always and everywhere. First, a 10% performance improvement is not much at all - in most applications you can get a LOT more by optimizing the hotspots identified with a profiler. Second, you'd most likely have gotten 9.5% of that improvement by doing the change in only a handful of places (again, the hotspots identified with a profiler).
    Do you really think we didn't profile it? We are continually profiling and performance testing our software. And no, it was a cumulative effect. 200 debugs per request * average 4 string concatenations = big performance problem. And 10% is a lot in a mature product. In most cases you are not going to be able to profile an app and find an easy 90% optimization. Your experience with toy apps might differ, but when a product is a few years old, 10% is a very good improvement for very little work. Really, is if(logger.isDebug()) logger.debug("blah: " + someObj); so much worse than logger.debug("blah: " + someObj);? Obviously it is not ideal, but it is hardly Duff's device.

    I notice that you conveniently skipped his second point which is the point that most of us that are against stupid "always do X" rules that muck up the code are trying to make.

    They only did the optimization after a few years, and only when they noticed they could get an edge from it. I could hardly call it "always do X".

    Please read before posting: "Second, you'd most likely have gotten 9.5% of that improvement by doing the change in only a handful of places"

    You are talking about time I was talking about location.

  • JBange (unregistered) in reply to OneFactor
    OneFactor:

    Anonymous:

    So, no shenanigans here - I've seen projects where such wrapping is policy, for every log statement at every log level. It might cost slightly more to create the code (Eclipse to the rescue, however), but it's far less expensive as far as runtime performance and maintenance costs go.

    This might disturb you, but it's one of the differences between industrial code and artisanal code.

    Alex, can we get a button which creates a post that quotes Donald Knuth, says that premature optimization is the root of all evil, and insists that performance tuning needs to be done in conjunction with a profiler rather than half-baked "coding standards"?



    There's a distinct difference between what Knuth calls "premature optimization" and programming sensibly. Pissing away clocks logging debug info isn't sensible, and waiting for a profiler to tell you obvious things like this is doubly insensible.
  • (cs) in reply to JBange
    Anonymous:
    OneFactor:

    Anonymous:

    So, no shenanigans here - I've seen projects where such wrapping is policy, for every log statement at every log level. It might cost slightly more to create the code (Eclipse to the rescue, however), but it's far less expensive as far as runtime performance and maintenance costs go.

    This might disturb you, but it's one of the differences between industrial code and artisanal code.

    Alex, can we get a button which creates a post that quotes Donald Knuth, says that premature optimization is the root of all evil, and insists that performance tuning needs to be done in conjunction with a profiler rather than half-baked "coding standards"?



    There's a distinct difference between what Knuth calls "premature optimization" and programming sensibly. Pissing away clocks logging debug info isn't sensible, and waiting for a profiler to tell you obvious things like this is doubly insensible.

    It all depends on your problem domain.  If you are always writing OS kernel code, then I might start to agree with what you said.  Very many problem domains are not CPU-sensitive.  What you call "pissing away clocks" I call focusing on the real task at hand.  If you look back 20 years, the pissing-away-clocks argument keeps changing.  As hardware and software get more powerful, we spend less and less time worrying about "pissing away clocks" and more time worrying about getting things developed quickly and correctly.  What you think is pissing away clocks, I think is pissing away developer time.

    I think requiring the isDebug guard in all situations is insensible.  I think this is exactly what Knuth is talking about.

  • Anon (unregistered) in reply to brazzy
    brazzy:
    Anonymous:
    Could ANYONE show me a desktop application written in Java that isn't a memory hog and runs at an acceptable speed??


    Try Puzzle Pirates.


    I'm going to repeat the first Anon's request here.
  • Cheerybounce (unregistered) in reply to Asd
    Anonymous:
    Wow there are so many "premature optimization" retards on this thread it is amazing. You guys have clearly never worked on java server software. Where I work we implemented the same optimization (without the wtf) and got a 10% improvement.

    When someone says performance-critical Java software, it is pretty safe to assume they are talking about server software, and in any server software there will probably be thousands of debug statements, many of which, as has been said before, will be calling expensive toString methods. You would have to be a moron to just assume that it is a premature optimization.

    For all the people that said they would never work in a place with that rule: don't worry, you are never going to get a programming job, but will be working as a linux sysadmin, thinking you are a great h@x0r cos you wrote some perl scripts, for the rest of your days.



    I wouldn't say they are retards, but I would say they do have a wrong viewpoint.
    Yeah, nice amount of speed increase, but you forget one thing...  You are spending a damn lot of precious time writing that one extra line per debug log message! And Java doesn't even supply tools to help with it. :@

    In other words, you people are just trading readability and time for speed by doing so. Premature optimization is evil indeed, but if you guys want to do that, you don't have any other way around it. I'm happy I don't have that problem anymore. 8-|

  • Orinocohol (unregistered) in reply to RevMike
    RevMike:
    antareus:
    Java may have adequate performance, but anytime I use a Java desktop app I groan to myself. The UI insists on looking different and acting funny (not all access keys are implemented, whereas with native controls you get them for free) for no reason. Then there's the GC being lazy about releasing chunks of memory, to the point where people tell me I shouldn't leave Eclipse open after I go home from work because it takes longer to un-minimize the sucker than to simply launch it again the next day.


    I'm not a fan of Eclipse myself.  And I have to admit that most Java GUIs aren't too hot.  I develop middleware applications, so it isn't an issue for me.  I ought to download Eclipse some day and see if I can tune the JVM to make the GUI perform respectably.

    Anyway, real men don't use fancy schmancy IDEs.  Real men use vim to code and to write their ant scripts, then run the whole thing from the command line.

    You pansy! Y'all do know that vim stands for, "vi IMPROVED," don'tcha? Why, I'll bet thasseveneasier to use than standurd vi!

    ...real men edit their text files with cat and echo. ;)
  • (cs) in reply to JohnO
    JohnO:
    Enric Naval:
    JohnO:

    Anonymous:
    brazzy:
    Anonymous:
    Wow there are so many "premature optimization" retards on this thread it is amazing. You guys have clearly never worked on java server software. Where I work we implemented the same optimization (without the wtf) and got a 10% improvement.


    The retard is you for thinking this justifies using the "optimization" always and everywhere. First, 10% performance improvement is not much at all - in most applications you can get a LOT more by optimizing the hotspots identified with a profiler. Second, you'd most likely have gotten 9,5% of that improvement by doing the change in only a handful of places (again, the hotspots identified with a profiler).
    Do you really think we didn't profile it? We are continually profiling and performance testing our software. And no, it was a cumulative effect: 200 debugs per request * an average of 4 string concatenations = a big performance problem. And 10% is a lot in a mature product. In most cases you are not going to be able to profile an app and find an easy 90% optimization. Your experience with toy apps might differ, but when a product is a few years old, 10% is a very good improvement for very little work. Really, is if(logger.isDebug()) logger.debug("blah: " + someObj); so much worse than logger.debug("blah: " + someObj);? Obviously it is not ideal, but it is hardly Duff's device.

    I notice that you conveniently skipped his second point which is the point that most of us that are against stupid "always do X" rules that muck up the code are trying to make.

    They only did the optimization after a few years, and only when they noticed they could get an edge from it. I could hardly call it "always do X".

    Please read before posting: "Second, you'd most likely have gotten 9,5% of that improvement by doing the change in only a handful of places"

    You are talking about time; I was talking about location.

    Hum, did you read HIS post? He says it was a cumulative effect, so it is not detectable with a profiler :)

    You're right about not reading; I was confusing it with a post from Brazzy that said "why weren't you doing it right from the beginning? Why did you notice that it gave a whopping 10% boost only after a few years?"

  • b0b0b0b (unregistered)

    I had to read that about 10 times before I realized the poster meant performance and not Perforce (the source control system).

     

  • Shawn B. (unregistered) in reply to RevMike

    Actually, you must have misunderstood me because I was saying something similar:

    public void LogDebug(string message)
    {
        // Bail out immediately when debug logging is disabled.
        if (!DEBUG)
            return;

        // do logging
    }

    Thanks,

    Shawn

  • (cs) in reply to cupofT

  • (cs) in reply to brazzy
    brazzy:
    foxyshadis:
    In short, Java got a great performance boost by copying .Net. Cute. <sigh></sigh>


    You are an incompetent, ignorant, idiotic Microsoft whore.

    Sorry, but a statement that wrong deserves nothing but abuse.

    .Net copied Java, NOT the other way round.


    Sorry, but I'm no one's shill. Some references I checked mixed up the versions when Java gained generational garbage collection (it was 1.3.1, not 1.4.1), so it was essentially a dead heat, although Java's was far less performant than .Net's until later 1.4 and 1.5 updates.

    That doesn't make you any less of a twit for coming in blazing when my post concerned only one feature, which was not in fact copied from Java.
  • (cs) in reply to Richard Nixon

    Richard Nixon:
    Mung Kee:
    Alexis de Torquemada:
    Mung Kee:
    Hmmm, Alexis, I never would have taken you for a conspiracy theorist.  See my earlier post....I have yet to see a benchmark that wasn't slanted one way or the other.  What's worse is the morons here, most of which likely have never read one, passing judgement regardless of whether they know anything about the "other language".


    I've read and performed some benchmarks comparing various C++ and Java implementations, and in most of them, C++ was slightly to dramatically faster (though D won some ahead of both others) even when taking JVM startup time into account. The others were typically biased against C++ by means of utter cluelessness (e.g. strdup'ing a string only to discard the result), whether malicious or not. There are numerous theoretical reasons why C++ cannot be slower than Java in certain respects, not the least of which being that everything the JVM does can be done in C++ as well. Java implementations have become surprisingly fast given that at one time they executed many programs dozens of times slower than equivalent C++ programs, but Java still is almost never faster than a good C++ program run through a state of the art compiler, as some people like to claim (on this board and elsewhere). It's simply a myth. There are many good reasons to use Java for long-running applications, and even some short-running applications if you're smart enough to use GCJ for that purpose instead of that JVM bloatware. But, sorry, "because it's faster than C++" just isn't among. BTW, I've used both languages extensively, in numerous implementations, so it must be someone else you are talking about in the last sentence.


    For the record, I wasn't grouping you with the morons.  It was a general statement.


    You're goddamned right you weren't. Otherwise he will bust your face in! He knows karate after all.

    Sincerely,

    Richard Nixon

     

    The first time I read this I thought he said bust in your face ...

  • Z (unregistered) in reply to brazzy
    brazzy:
    Z:

    Furthermore, the Java specification is really bad, although they fixed some issues with the new memory-model in Java 1.5. The basic problem is still that Java prescribes a shared-memory model for threaded applications which puts an inherent limit on the scalability of the applications, as well as imposing an inferior model on the programmer.


    I rather doubt that many people would agree that shared-memory multithreading is an "inferior model". Any other model would put severe limitations on what you can do with threads. I mean, it sounds impressive that Erlang can spawn threads like wildfire, but not so much when you realize they can't do many of the things people have come to expect from threads. It may be that you can adjust expectations and programming techniques to live with the limitations and profit greatly from the advantages, but I don't think that amounts to a fundamental superiority.


    Ok, maybe I was overly argumentative/dismissive. Sorry for that.

    However, it is not the case that you can't do things in a threaded environment if you don't have shared memory; it just requires a different style of programming. For the Erlang example, the very reliable and scalable Ericsson telecom switches are good evidence that it is a viable platform for large, performance-sensitive, long-running applications. Also, there is a nice distributed database for Erlang, called Mnesia. For some interesting discussion about reliable computing, see Joe Armstrong's PhD thesis (Armstrong is one of the designers of Erlang).

    Another nice example is DragonflyBSD. Although it is not finished yet, it does work, and the infrastructure is in place. The kernel in this OS is based on very simple light-weight threads that communicate through messages.

    My point is that the advantage of rethinking things in terms of message passing, rather than using shared memory, is such a big reduction in architectural complexity that there are very few places where you would actually want shared memory. Viewing shared memory as the "dangerous feature" of threading, akin to goto among control structures, is quite a good analogy, IMHO.
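    The message-passing style Z describes doesn't require Erlang; even in plain Java you can approximate it by giving each thread private state and communicating only through a queue. A minimal sketch (class and method names are mine, not from any post above):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MessagePassing {
    // The worker owns its accumulator; the queue is the only channel
    // between the threads, so no locks on shared state are needed.
    static int sumFromInbox(BlockingQueue<Integer> inbox) throws InterruptedException {
        int sum = 0;                        // private to this "process"
        int msg;
        while ((msg = inbox.take()) >= 0)   // a negative value is the stop signal
            sum += msg;
        return sum;
    }

    public static void main(String[] args) throws Exception {
        BlockingQueue<Integer> inbox = new ArrayBlockingQueue<>(16);
        Thread sender = new Thread(() -> {
            try {
                for (int i = 1; i <= 4; i++) inbox.put(i); // send messages
                inbox.put(-1);              // "poison pill": tell the worker to stop
            } catch (InterruptedException ignored) { }
        });
        sender.start();
        System.out.println("sum=" + sumFromInbox(inbox)); // prints sum=10
        sender.join();
    }
}
```

    Nothing here is shared except the queue itself, which is the whole point: the "dangerous feature" never enters the design.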
  • zootm (unregistered) in reply to cupofT
    cupofT:
    If performance is critical why are they using java?[:S]

    Because Java performs well, and is reliable.

    Duh.
  • (cs) in reply to brazzy
    brazzy:

      If you are so incompetent that you need 200 debug statements to figure out what happened while handling a request that does hardly any work, then fine, it works out well for you. Going from there to "anybody who doesn't do the same thing is a moron and has clearly no experience" is... rather moronic.



    I'd much prefer 199 excess debug statements in my logfile to no debug statements logged at all because most things don't fail. I've spent a lot of time trying to figure out why some program failed at a customer site.

    Wading through thousands of lines in a log file is boring, but at least when the customer has problems (I am not a perfect coder, and my co-workers are not either) I can figure out what was going on. 

    The only problem I have with 200 debugs for a simple transaction is if it happens often enough that these logs obscure (perhaps because of log file rotation) some other problem elsewhere, or they make the program slower than acceptable.

    Logging is good.  

    I have had to tell customers too many times that I have no idea why their job failed.    So I keep adding more and more logging until I can figure out what is wrong.

    I have seen log errors that work out to something like "Error, 2+2 == 5".    Hardware is not always initialized when someone says it is.  So I reject all arguments that can't-happen cases should not be checked for.  (though I would make the checks)
  • eswdd (unregistered) in reply to Clock Man

    You obviously haven't been involved in many large server side applications where you could easily have a few thousand clients calling the code every minute!

  • Josh (unregistered)

    Looks like the dust is finally starting to settle on this... wow, I've been reading this site for quite a while and I've never seen such a flamewar (although the debate on the "interview WTF's" a few weeks ago was close).  I'm the Josh in the post, and I stand by my review suggestion, even in the face of the debate that's raged here for the last couple of days.  I agree with Knuth's statement that "premature optimization is the root of all evil", but there's nothing premature about this optimization - it's actually kind of a non-issue in Java circles.  Or, were you really implying that we should re-check this standard optimization for every program we write and then go back and re-apply it after the profiler (inevitably) tells us that, yes, this is a good optimization?  Yikes - talk about wasting programmer time.

    Although string concatenation can be a time waster (from "Effective Java", item 33 - "Using the string concatenation operator repeatedly to concatenate n strings requires time quadratic in n"), my biggest concern at the time was memory usage; this logging pattern creates an awful lot of objects that need to be GC'ed (this was on an older JDK, though - this WTF occurred about 5 years ago).
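    The Effective Java point is easy to see side by side. A minimal sketch (method names are mine); both produce the same string, but the naive version copies the whole accumulated result on every iteration:

```java
public class ConcatDemo {
    // Repeated += on String copies the entire accumulated string each
    // time, giving quadratic time and one discarded object per iteration
    // for the garbage collector to clean up.
    static String concatNaive(String[] parts) {
        String s = "";
        for (String p : parts) s += p;
        return s;
    }

    // StringBuilder appends into a growable buffer: linear time and far
    // fewer temporary objects.
    static String concatBuilder(String[] parts) {
        StringBuilder sb = new StringBuilder();
        for (String p : parts) sb.append(p);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(concatBuilder(new String[]{"foo", "bar"})); // prints "foobar"
    }
}
```

    For a handful of parts the difference is noise; at 200 debug lines per request, times thousands of requests, the temporaries add up exactly as Josh describes.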

    'Twas a fun flamewar, though.  Wait until tomorrow when I submit the WTF about the guy who put his curly braces on the same line as the if statement instead of on the next line.

  • (cs) in reply to foxyshadis

    foxyshadis:
    Some references I checked mixed up the versions when Java gained generational garbage collection (it was 1.3.1, not 1.4.1), so it was essentially a dead heat, although Java's was far less performant than .Net's until later 1.4 and 1.5 updates.



    Still very wrong. The first release of .NET 1.0 was in early 2002, as was Java's 1.4, whereas the first occurrence of generational garbage collection in Java was in 1.2.2 (released mid-1999) - there was an -Xgenconfig command line option to configure it. At that time, .NET hadn't even been announced yet.
  • (cs) in reply to Josh
    Josh:

    'Twas a fun flamewar, though.  Wait until tomorrow when I submit the WTF about the guy who put his curly braces on the next line instead of on the same line as the if statement.


    IFYPFY.  No need to thank me.
  • cactus (unregistered) in reply to StarLite

    Who knows, because we don't see all the code here, but if this code is using a current logging package you wouldn't necessarily see that at all. In a production environment you'd set your logging level to ERROR or similar, and you'd see none of these messages in your log.

  • Josh (unregistered) in reply to RevMike

    Begun this flame war has.

  • cowardly dragon (unregistered) in reply to indeed

    That's why I do this:

    /DBG/if(lg.don)lg.dbg("A log message that doubles as a comment");

    With most syntactic coloring, the green /DBG/ helps you easily distinguish log/comments (and sufficient debugs are often all you need to comment code) and reduces the clutter.

    Executing a test to see whether debug is on is about as close as you can get to the atomic, smallest computational unit, which is why I say O(1) and not O(c). Compare that dbg-on check to string concatenation, which requires at least one pass through the strings and objects, plus object-instantiation overhead. I don't know why I said O(2n); that's probably wrong, or maybe I said it to account for additional conversions and instantiation. IIRC, big-O is a relative measure, not an absolute one, although it is obviously used to estimate absolute execution time once the hardware parameters are known. The relative nature is necessary considering how fast computers have improved over the history of compsci.

    I think O(1) refers to a constant-time operation, such as multiplying two numbers, assigning a pointer, or checking a numeric value/flag.
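    That cost ordering (a constant-time flag check versus linear-time string building) is the whole argument for the guard pattern from the article. A sketch using java.util.logging, which needs no external jars (class and method names are mine):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class GuardedLogging {
    private static final Logger LOG = Logger.getLogger(GuardedLogging.class.getName());

    // Simulates a costly toString: one pass over the data plus buffer growth.
    static String expensiveDescription(int[] data) {
        StringBuilder sb = new StringBuilder();
        for (int d : data) sb.append(d).append(',');
        return sb.toString();
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3};

        // Unguarded: expensiveDescription and the concatenation run even
        // though FINE is disabled by default, and the result is thrown away.
        LOG.fine("data: " + expensiveDescription(data));

        // Guarded: isLoggable is a cheap level comparison, so the
        // concatenation and the expensive call are skipped entirely.
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("data: " + expensiveDescription(data));
        }
    }
}
```

    The guard is the O(1) flag test cowardly dragon describes; everything it skips is the O(n) part.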

  • cowardly dragon (unregistered) in reply to Chachky

    I make my beans in java. But...

    The dynamic recompiling bytecode interpreters seem to eventually produce native translations of bytecode that run faster than the output of many C++ compilers. But that still doesn't address Java's woeful track record with memory hogging, or the fact that it takes a few run-throughs of the code to reach the optimization sweet spot. In server code that isn't (theoretically) reloaded all that often, this optimization will occur. In desktop apps, where someone loads up an app, does a quick operation, and closes it two minutes later, that isn't true, and Java sucks at that.

    And consider how long it still takes to run an applet in a browser (I just upgraded to JDK 1.5, and that seems to slow applet loading in Firefox to a ten-minute startup crawl), which is just ludicrous when you look at what Macromedia can do with Flash; Java will never kick its bad-performance rep. I mean, seriously: ten minutes to do a basic file selection applet. Ridiculous.

    Finally, keep in mind that different programming languages don't really have fundamental advantages in speed; it comes down to how good the compiler is at generating optimized machine code. Some languages are surely easier to generate fast assembly for (C/C++) than others (LISP), but the fact that "interpreted" languages can be brought on par with compiled code via optimizing runtime engines is a significant, positive development in programming technology.

  • cowardly dragon (unregistered) in reply to frosty

    Its runtime interpreter caches the machine code it compiles from Java's bytecode, and supposedly profiles its execution to determine how to run it more quickly.

    I don't know this for a fact, but I could guess that it identifies the most frequently executed loops and does memory-expensive but performance-boosting machine code generation for that stuff, but the rest of the program is more conventionally cached.

    Since the runtime interpreter can use real-world, practical results to optimize code (and can recompile parts on the fly), it should theoretically be better than a static compiler, which has to make its assumptions at compile time without any sort of real usage analysis.

  • (cs) in reply to cowardly dragon
    Anonymous:
    Since the runtime interpreter can use real-world, practical results to optimize code (and can recompile parts on the fly), it should theoretically be better than a static compiler, which has to make its assumptions at compile time without any sort of real usage analysis.


    I have been arguing the "Java performs well" side of this argument.  I'm a java fan, not a java hater.

    Many static compilers can use profiling data to improve their optimizations. The cycle is compile-profile-recompile. So C++ developers can, with not insignificant effort, do this as well. I suspect that most don't unless they are developing for an embedded app where every cycle counts.

    The plus for Java here is not that Java can do this, but that this happens automatically without any additional developer effort.
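    That automatic profile-feedback loop can actually be watched. A tiny sketch (the class is mine; the method is just filler work): run it with the real HotSpot flag -XX:+PrintCompilation and the JVM logs the moment the hot method gets compiled to native code, partway through the warm-up loop, with no recompile step on the developer's part.

```java
public class JitWarmup {
    // A small hot method; after enough invocations HotSpot's JIT
    // compiles it to native code based on observed execution counts.
    static long hot(long n) {
        long s = 0;
        for (long i = 0; i < n; i++) s += i & 7;
        return s;
    }

    public static void main(String[] args) {
        long result = 0;
        // Warm-up loop: each call bumps the invocation counter until the
        // JIT decides the method is worth compiling.
        for (int i = 0; i < 20_000; i++) result = hot(1_000);
        System.out.println(result); // prints 3500
    }
}
```

    This is the C++ compile-profile-recompile cycle RevMike describes, collapsed into the runtime itself.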
  • (cs) in reply to RevMike
    RevMike:
    A wizard will always be able to make something run faster by dropping to a lower level language - Java->C++->C->Assembler->machine code. 

    Please provide an example where machine code can be used to write a faster program than assembly, assuming that the assembler supports all available instructions.

  • Ptx (unregistered) in reply to Alexis de Torquemada
    Alexis de Torquemada:
    Anonymous:


    You do realize of course that there are numerous benchmarks these days that are showing Java as FASTER than C/C++, right?


    You do realize that they are either extremely narrow in scope or written by people who are absolutely clueless about C++ and who have a grudge to settle?



    Sorry for being late to the flamewar and all, I'll try to be on time next time. Until then, read this, and since the inevitable objections are bound to come up, be certain to read this as well.

    I'm not saying that these benchmarks are the final word on the matter, but I challenge you to explain what kind of benchmarks you'd like to see that aren't included there, or to demonstrate that Lewis and Neumann are "absolutely clueless about C++".

Leave a comment on “Squishin' de Bugs”
