• Anonymous Coward (unregistered)

    OK, I feel dumber than the original coder. 

    I just looked up "perforce" in the dictionary and had no idea why that was critical.

    Then I realized it was performance that was critical.

    Did someone do a search/replace on "man" before submitting the article?  [8-|]

  • Xaria (unregistered) in reply to cupofT

    Because Java isn't as bad as people make out; it just takes a lot longer to start up. Plenty of big companies use Java for projects that need to be FAST and reliable. I would name one, but I suspect I'm not allowed to ;) 200+ transactions a second, and I'm not talking "insert into"-type transactions, but ones a lot more complex than that.

    Java's problem is that it is easy to write code with poor performance. At least you're a lot less likely to have major memory leaks.
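
    For example (a made-up snippet, not from the article): repeated String concatenation in a loop is easy to write and quietly quadratic, while the StringBuilder version of the same code is linear.

    // Hypothetical illustration of easy-to-write slow Java.
    public class ConcatExample {
        public static void main(String[] args) {
            // Strings are immutable, so each += copies everything built so far:
            // this loop is quadratic in the total length produced.
            String slow = "";
            for (int i = 0; i < 10000; i++) {
                slow = slow + i + ",";
            }

            // A single mutable buffer does the same job in linear time.
            StringBuilder fast = new StringBuilder();
            for (int i = 0; i < 10000; i++) {
                fast.append(i).append(',');
            }

            System.out.println(slow.length() == fast.toString().length());
        }
    }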

  • (cs) in reply to rsynnott
    rsynnott:
    RevMike:
    A wizard will always be able to make something run faster by dropping to a lower level language - Java->C++->C->Assembler->machine code. 

    Please provide an example where machine code can be used to write a faster program than assembly, assuming that the assembler has all available instructions.



    OK, you got me on that one.  Macro assemblers will automagically provide things for us, sometimes in less-than-perfect ways.  Writing true assembly without those crutches is indistinguishable from writing machine code.

  • (cs) in reply to cowardly dragon
    Anonymous:

    Executing a test to see if debug is on is about as close as you can get to the atomic, smallest computational unit, which is why I say O(1) rather than O(c), especially when comparing the dbg-on check to string concatenations, which require at least one pass through the strings and objects, plus object instantiation overhead. I don't know why I said O(2n); that's probably wrong, or maybe I said it to account for additional conversions and instantiation. IIRC, big-O is a relative measure, not an absolute one, although it is obviously used to guess at absolute execution time once the hardware parameters are known. The relative nature is necessary considering how fast computers have improved over the history of compsci.

    I think O(1) refers to a single-number operation, such as multiplying two numbers, assigning a pointer, or checking a numeric value/flag.


    You still haven't understood the point of big-O notation, which is to disregard ANY constant factor, be it from hardware or from less-than-optimal coding. Thus, O(2n) and O(n) are identical. You use it to judge how well algorithms perform relative to their input size, because that eventually dominates any constant factor for large enough inputs, and that's where a better algorithm becomes REALLY important. And that's why it often comes up in discussions of optimization: "pessimizations" like the one here are really, really irrelevant when you're running an O(n^2) algorithm where you should be using one that is O(n*log(n)).

    In this particular case, string concatenation is indeed O(n) in the length of the strings (unless the strings are implemented as some sort of linked list), but since the length of the strings that get logged will usually not be proportional to the input, and since you're going to do something that's at least O(n) relative to the input ANYWAY (such as reading it in...), it's totally irrelevant as far as big-O notation is concerned.
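
    To make that concrete, here's a minimal, self-contained sketch of the guarded-logging pattern the thread is arguing about. The DEBUG flag, logDebug helper and summing loop are hypothetical stand-ins, not the article's actual code; the point is only that the guard (and the concatenation it skips) is constant work per call, while the surrounding algorithm is already O(n).

    // Hypothetical sketch: the guard and the concatenation it avoids are
    // constant-cost per call; the loop below already dominates at O(n).
    public class GuardExample {
        static final boolean DEBUG = false;

        static void logDebug(String msg) {
            System.err.println(msg);
        }

        static long process(int[] input) {
            if (DEBUG) {
                // The concatenation walks the message once -- a cost that
                // depends on the message length, not on input.length.
                logDebug("processing " + input.length + " records");
            }
            long sum = 0;
            for (int i = 0; i < input.length; i++) {   // the real work: O(n)
                sum += input[i];
            }
            return sum;
        }

        public static void main(String[] args) {
            System.out.println(process(new int[] {1, 2, 3}));
        }
    }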

  • (cs) in reply to cowardly dragon
    Anonymous:
    The dynamic recompiling bytecode interpreters seem to _eventually_ produce native translations of bytecode that will run faster than the output of many C++ compilers. But that still doesn't address Java's woeful track record with memory hogging, or the fact that it takes a few run-throughs of the code to reach the optimization sweet spot. In server code that isn't (theoretically) reloaded all that often, this optimization will occur. In desktop apps where someone loads up an app, does a quick operation, and closes it two minutes later, that isn't true, and Java sucks at that.


    Two minutes is PLENTY of time to find and optimize the hotspots where your application spends 80% of its time. Heck, two SECONDS is usually enough. The class loading itself (reading in, parsing and verifying the bytecode) usually takes more time.

    Anonymous:
    And consider how long it still takes to run an applet in a browser (I just upgraded to JDK 1.5, and that seems to slow applet loading in Firefox to a 10-minute startup crawl), which is just ludicrous when you look at what Macromedia can do with Flash; Java will never kick its bad-performance rep. I mean, seriously: ten minutes to do a basic file selection applet. Ridiculous.

    Bullshit. Either that applet is programmed incredibly crappily, or your system is seriously fucked up. It most definitely has nothing to do with Java. It does NOT take anywhere near 10 minutes to start up an applet, especially not in Java 1.5. A common cause of extreme delays like that (and not only with Java) is a badly written virus scanner that does on-access scanning of the entire Java class library.


  • (cs) in reply to Richard Nixon
    Richard Nixon:
    Wow. That was clever. What a zinger! You really got me there.

    Hey, did you happen to fight anyone with your karate skills today?


    Come on - it can't be much dumber than reminding people of past incidents where you publicly displayed your abysmal judgement, can it?

  • (cs) in reply to emptyset
    emptyset:

    anyone got a sewing machine?  because alexis is RIPPED!



    I picture you as pretty athletic as well, doing farm work all day. You know - forming bales of hay, carrying fertilizer bags, shearing sheep, humping dead cows ...

  • Chris (unregistered) in reply to frosty

    I'm genuinely intrigued.  With all that C/C++ has going for it, how could Java pull off being faster (I assume you mean after the JVM starts up, or is that included)?


    It turns out that, since the Java VM is calling the shots on just about everything, it can perform some too-scary-for-C/C++ dirty tricks. For example, Java may be able to allocate memory faster than C/C++ code that uses a reasonable implementation of malloc (see http://www-128.ibm.com/developerworks/java/library/j-jtp09275.html?ca=dgr-lnxw01JavaUrbanLegends).
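
    A crude way to see what that article is getting at (a rough sketch, not a rigorous benchmark -- JIT warm-up, GC pauses, and even escape analysis can skew the timing or eliminate the allocations entirely):

    // Rough illustration only: in a generational JVM, allocating a short-lived
    // object is typically just a pointer bump, and objects that die young are
    // reclaimed almost for free. Timings from a loop like this are noisy.
    public class AllocExample {
        static final class Point {
            final int x, y;
            Point(int x, int y) { this.x = x; this.y = y; }
        }

        public static void main(String[] args) {
            long start = System.currentTimeMillis();
            long checksum = 0;
            for (int i = 0; i < 10000000; i++) {
                Point p = new Point(i, -i);   // dies immediately
                checksum += p.x;
            }
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(checksum + " (" + elapsed + " ms)");
        }
    }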

    If Java is run in interpreted mode, it tends to run slower than anyone can reasonably stand. Fortunately, the most popular platforms feature JIT compilers that compile to native code. They're pretty speedy... really.

    If you compare comparable implementations of the same code in Java versus C++, I think you're going to find that they are about the same. The JVM takes some time to start up (maybe a second or two on a slow machine) and the JIT process itself takes time, of course (JIT compilation is usually performed either the first or second time a code path is executed). Once that hit is taken, the code actually executing on the processor should be about the same... the JIT generates the same opcodes to perform arithmetic ops, play with the stack for method calls, and chase pointers all through memory.

    Memory access in Java is a bit slower, since objects must be "pinned" when you're going to mutate them (otherwise, the memory manager could move that bit of memory around during a sensitive operation). However, pinning itself is pretty much trivial (bit flipping), and it's really the garbage collector that has to deal with the problem of moving pinned regions of memory.

    What I find ends up being the biggest problem (and most notable for many Java haters -- for performance reasons) is the poor GUI API compared to how things are implemented in the OS. For example, Java has a very rich API for some very nice GUI constructions (Swing). The problem is that Swing is implemented all in Java, which means that it does all its own widget drawing, etc. That means that the only stuff the windowing system will do is draw top-level windows and some primitives like lines and stuff. IBM (I think) has SWT, which is actually pretty darned cool and implemented using native windowing-system widgets, but it's not exactly a standard (at least not yet) and not a drop-in replacement for AWT or Swing. :(

    So, say what you will about Java. Most of the complaints are based on uninformed conjecture and outdated evidence ("wah! Java sucked in 1995 when I ran it in Netscape that one time... it must suck!"). Those of us who know it gets the job done -- very well, in fact -- will continue to use it as a very productive application environment.

    -chris

  • (cs) in reply to Mung Kee
    Mung Kee:

    HAHA, Karate is for kids and sissies.


    Do you mean on the giving or the receiving end?

  • (cs) in reply to Justin.
    Anonymous:
    Initial startup is faster for C/C++, but runtime allocation of memory is faster in (recent) Java.  Thus for long-running applications, Java wins.

    Justin.


    Thanks for this extremely simple-minded and thus utterly uninformative post. What's memory allocation in C++ anyway? Operator new? Stack-based allocation? Aggregation of members (which is obviously zero-cost, something that Java can never top)? Custom-made allocators? Which implementations of C++ and Java are you talking about?

  • (cs) in reply to RevMike
    RevMike:
    Java also provides some benefits like better type checking and memory management which help improve reliability.


    I agree with most of your post, but not with this point. Java has stricter dynamic type checking, while C++ allows for stricter static type checking. Generics solve the gravest of Java's disadvantages in this regard (it used to lack type-safe containers), but not all of them.
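
    For anyone who missed the pre-1.5 days, here's a small sketch of the container point (class and variable names are just for illustration):

    import java.util.ArrayList;
    import java.util.List;

    // Before generics, a List accepted anything and mistakes only surfaced
    // at runtime; with generics the compiler rejects them outright.
    public class GenericsExample {
        public static void main(String[] args) {
            List raw = new ArrayList();              // pre-1.5 style: no element type
            raw.add("forty-two");                    // nothing stops this...
            // Integer oops = (Integer) raw.get(0);  // ...until it blows up at runtime

            List<Integer> typed = new ArrayList<Integer>();
            typed.add(Integer.valueOf(42));
            // typed.add("forty-two");               // rejected at compile time
            Integer ok = typed.get(0);               // no cast needed
            System.out.println(ok);
        }
    }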

  • (cs) in reply to Alexis de Torquemada
    Alexis de Torquemada:
    Richard Nixon:
    Wow. That was clever. What a zinger! You really got me there.

    Hey, did you happen to fight anyone with your karate skills today?


    Come on - it can't be much dumber than reminding people of past incidents where you publicly displayed your abysmal judgement, can it?



    Do you have a theme song that gets played whenever you warm up to karate-chop the UPS guy?

    Sincerely,

    Richard Nixon
  • (cs) in reply to Alexis de Torquemada

    Alexis de Torquemada:
    I picture you as pretty athletic as well

    maybe we can go to a nude beach sometime and show off our six-packs.  where did you get the crazy idea i work on a farm?

  • Anon (unregistered) in reply to emptyset

    Hey Alexis: Waaaaa!!!

    hahahahaha

  • Kuba Ober (unregistered) in reply to hoens

    To delete objects in Java costs (from what I've heard) about 0 system calls, since the JVM handles the memory, whereas deleting in C++ takes significantly more since you're married to the hardware.

    IIRC, on most Unixish C libraries, virtual memory is only allocated (by malloc() invoking [s]brk()) every once in a while, and free() nominally doesn't release any of it. If some VM goes unused, it just gets swapped out. IIRC as well, free() may do munmap() if a "big" block is being freed, but then we're talking about big blocks.

    On decent implementations, deleting in C++ touches about as much hardware as, say, zeroing a 512-byte array. By that I mean how much data memory the underlying implementation has to touch in order to update all the block lists and other housekeeping stuff (I exclude the code memory, obviously).

    I'd suggest not only actually doing some research, but also not propagating hearsay that likely originated with people whose code ends up on WTF. Sigh :(

    Cheers, Kuba

  • Brian (unregistered)

    Arguably the real WTF here is the fact that in Java, every call to logDebug really does have to be changed to produce the desired speedup. This exact code change was discussed in the great Stevey's Drunken Blog Rants post Scheming is Believing. It's a long page; search for "you can't write logging, tracing, or debugging statements intelligently in Java". His conclusion: "The debug-flag problem is an evaluation-time problem. You can fix problems like this either by adding support for lazy evaluation to your language, or by adding a macro system."

    This article will also shed some light on why Josh requested the change in the first place, which seems to have confused some commenters.
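
    Here's a small sketch of the evaluation-time problem being described. The expensiveReport() helper is made up, and the Supplier overloads of java.util.logging.Logger only exist in Java versions released long after this discussion, but they show the lazy-evaluation fix that the guard idiom approximates by hand.

    import java.util.logging.Level;
    import java.util.logging.Logger;

    // Sketch of eager vs. lazy message construction; names are illustrative.
    public class LazyLogging {
        private static final Logger LOG = Logger.getLogger("example");

        static String expensiveReport() {
            // stands in for the costly string building the guard is meant to avoid
            return "state dump: " + System.currentTimeMillis();
        }

        public static void main(String[] args) {
            // Eager: the argument is built even when FINE logging is disabled --
            // this is exactly the evaluation-time problem.
            LOG.fine("report = " + expensiveReport());

            // The manual guard the article's change amounts to, repeated at
            // every call site:
            if (LOG.isLoggable(Level.FINE)) {
                LOG.fine("report = " + expensiveReport());
            }

            // Lazy (later Java versions): the lambda body only runs if FINE is
            // actually enabled, so no per-call-site guard is needed.
            LOG.fine(() -> "report = " + expensiveReport());
        }
    }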
