• (cs) in reply to King Kong
    King Kong:
    It comes and it goes. The 8086 was microcoded: the 80186 was 10 times faster at the same clock speed because the microcode had been reduced to silicon. The 80286 was then 2 times faster because the clock speed was increased.
    A significant fraction of the 80186's instruction set was still microcoded. The difference was that the microcode support for certain sub-operations was simplified and moved into dedicated logic, speeding up some common addressing modes. I'll accept, however, that "10 times faster" is hyperbole, partly because if it were true, the 80186 would have been the first x86-line processor with a superscalar architecture (instructions per clock > 1). And it wasn't.

    And don't forget that the execution time of most instructions was dominated by their fetch time, even on the 16-bit bus versions. (A reg-reg operation takes 3 or so clock cycles to execute, and potentially 6-8 clock cycles to fetch. On an 8088 or 80188, it is guaranteed to take two memory cycles at 3-4 clock cycles each, while if you are lucky on an 8086 or 80186, it can take only one memory cycle.)

  • Casey (unregistered)

    I remember when EMC "upgraded" their library management software to Java, and it was so slow compared to their earlier version as to be almost unusable.

    Java was and still is sloooooow, but now PCs are just so much faster that it doesn't matter. Remember, this WTF was back in the late 90's.

  • RV (unregistered)

    "Dick had of predictably blamed Java." Is this a classic typo?

  • Davor (unregistered)

    This being the late 90's, I can see where Dick is coming from. Java as used by consumers, mostly in their browser as horrendous applets, really was terrible. I don't know how well it worked as a business application, though.

    But still to this day, after suffering so many problems with Java, its updater and what not, the word Java causes an immediate :-S reaction.

    Actually using it just isn't user friendly at all. With Visual Whatever it's: compile, double-click the exe, it runs. With Java it's: locate an actual runtime that will run it, feed it the program as a parameter, and hope it runs. Of course, having many JREs, JDKs and what not installed on the same computer doesn't help.

  • QJo (unregistered) in reply to EmperorOfCanada
    EmperorOfCanada:
    The main problem that I have with Java is that for some reason the users of Java seem to think that it is a good idea to create 800 classes for every problem. OOP is great but it is not the only design pattern/philosophy in the book.

    The result is not Spaghetti code but a spaghetti architecture.

    Now I know what I've been doing wrong, I only used 799 classes for the last project I built.

    Your argument is an instance of the one that goes: "I heard somewhere that people who don't fully understand a particular programming technique sometimes produce designs that appear, to my own limited understanding of that technique, flawed on the surface - therefore it is always bad, and you should always use the particular technique I learned when I turned up for the first few lectures of the "how to program for beginners" evening class at the local infants' school."

  • Najzero (unregistered)

    Knock Knock. Who's there? ... ... ... ...

    Java.

  • chris (unregistered) in reply to Copypasta
    Copypasta:
    Java's syntax is so clean, it's aseptic. It makes up for this by requiring so much boilerplate to accomplish any non-trivial task that one wonders if javac thinks all developers are retarded. The workaround is to use an IDE which will write this boilerplate for you. Preferred ones are NetBeans--which will make you wonder if its developers have ever used an IDE before--and Eclipse, which will rape your memory manager and also make you wonder if its developers have ever used an IDE before. Using an IDE will also acquaint you with the Java GUI development standards: 1) no single thing should look like any other single thing from the host OS; 2) a dialog that opens in less than 5 seconds is a showstopping bug; 3) there is no rule 3.
    No, there IS a rule 3 but you're not going to get to see it because Eclipse has just hung your entire desktop again.
  • Brian Bobley (unregistered) in reply to eViLegion
    eViLegion:
    Pete:
    eViLegion:
    Pete:
    You are new around here, aren't you?

    No, I'm registered... so, who the fuck are you?

    PS. The meme is "You must be new here."

    Whoooooooosh!

    PS. The whoosh meme only works if you've actually made a funny and the person you're wooooshing hasn't understood it, as opposed to having completely understood it and mocked you for it.

    I think you may be confused into thinking people aren't just mocking you for your belief that a firm understanding of memes makes you somehow superior.

  • Meep (unregistered) in reply to ip-guru
    ip-guru:
    Meep:
    meys ran with their tails between their legs. And the Treaty of Paris was in September.

    You are correct, I meant September; can't think why I wrote December.

    I'm surprised no one spotted that earlier.

    Enjoy the celebrations on your side of the pond

    Ugggh the hangover...

  • (cs)

    Java and C++ are both relatively good languages, but their application domain is rather different. (Although I personally advocate for C# to replace Java).

    Java is great for your common "business logic" application. It's simple (don't need to hire smart programmers), has good IDE support, pretty easy to deploy, comes with a vast library and has even more third-party libraries available.

    However, as soon as keywords such as "performance" (now less), "memory usage" (still a huge issue), or "integrates well with the underlying OS" are mentioned... Well, C++ suddenly becomes a pretty good choice.

    Basic C++ isn't more difficult than Java, but C++ offers a lot more features (for one thing, the template system in C++ dwarfs Java's generics implementation -- which consists basically of automated casts from/to Object) for library developers. In return for this deal you'll get less IDE support and more complicated error messages (although projects like Clang attempt to fix both). If you apply the proper patterns, memory management is as easy if not easier than with a GC.

    If you want to see a case study where a C++ implementation might have increased performance by allowing stack-allocated objects and decreasing memory usage, look up a game called "Minecraft".

    Also, Java is ludicrously verbose. Some have called it "COBOL of the 21st century": so simple (as in featureless) even a chimp could use it, extremely verbose, and managers tend to love it.

    Take, for example, the task "open a text file, read it line by line and store it in an array". I'll give you the Perl code:

    open(my $file, "<", "in.txt") or die;
    my @list = <$file>;
    

    Java programmers: please show me the Java code for this without looking it up or using your IDE.

  • Merp-Mop (unregistered) in reply to SeySayux
    SeySayux:

    Java [...] has good IDE support

    [Citation Needed]

  • John M. (unregistered)

    $0.65?! Ripoff. Our sodas are $0.50.

  • Jo (unregistered) in reply to Steve The Cynic
    Steve The Cynic:
    And don't forget that the Java compiler builds code for one architecture - the JVM.

    While I mostly agree with the rest, this is irrelevant, partly because you're replying to a post that said "architecture" and meant "processor model".

    Usually you don't compile a C++ program version separately for each i386 or amd64 model out there. It would just confuse users, who often don't know their processor model anyway. The JVM, on the other hand, can optimize for the exact processor model it's running on, and exploit the newest-and-greatest improvements on the newest-and-greatest CPU model. (What I don't know is how much of that is actually happening, and what the impact is.)

  • Mr. Bob (unregistered) in reply to nope
    "So what happened?"

    "Dick happened."

    My new answer of the day.

  • Dann of Thursday (unregistered) in reply to eViLegion
    eViLegion:
    Pete:
    You are new around here, aren't you?

    No, I'm registered... so, who the fuck are you?

    PS. The meme is "You must be new here."

    Hahaha holy crap. Did...did you not only completely miss the joke of his post, claim you were some kind of regular because you're registered (my god that's hilarious), AND smugly try to correct the use of a meme he didn't even really invoke?

    Man I can't wait for fall so all the cute lil high schoolers are gone again.

  • (cs) in reply to Merp-Mop
    Merp-Mop:
    SeySayux:

    Java [...] has good IDE support

    [Citation Needed]

    Compared to C++, where you're lucky if it correctly figures out the line number where your compiler reported the syntax error was (no live syntax checking possible, as compiling a C++ source file takes 3-5 seconds)? Yes.

    Just forget about stuff like static code analysis, debugger integration, diagram generation, ... unless you purchase some very expensive tool which usually bails out once it has to parse template or preprocessor code.

  • Evan (unregistered) in reply to King Kong
    King Kong:
    TheCPUWizard:
    Well for many years now x64 and x86 architectures have been.....*cough*virtualmachine*cough*

    (Long gone are the days when the transistors of the ALU directly switched based on bit patterns of the op codes. Every mainstream modern machine is a virtual API to an internal RISC processor....)

    It comes and it goes. The 8086 was microcoded: the 80186 was 10 times faster at the same clock speed because the microcode had been reduced to silicon. The 80286 was then 2 times faster because the clock speed was increased.

    Part of every mainstream modern machine is still microcoded, particularly the legacy instructions that are not used much anymore.

    So I think TheCPUWizard was actually not talking about microcode.

    I'm not even remotely up-to-date on my x86 microarchitecture, but several years ago I did quite a bit of reading on the Pentium 4 Netburst for a school report on superpipelines (the original P4 was something like 23 pipeline stages, and the Prescott core rather more!).

    But on the P4, what was actually executed by the back end of the chip was not x86 instructions, rather it was something Intel called microoperations. These aren't really microcode; rather, it's more like the chip contained a hardware x86-to-μop compiler. x86 instructions were translated to μops, and those were what was actually executed. In fact, the L1 icache in the Pentium 4 did not even store x86 instructions -- it stored μops.

    The P4 was not even remotely the first x86 chip to decode x86 instructions to μops, but its distinguishing characteristic and the reason I am using it as an example was the icache, because that is a clear indication that we're not talking about microcode. What I don't know is whether that design has been continued by the Core chips or picked up by AMD or abandoned like many of the other P4 design characteristics. :-)

  • (cs)

    "As the years past by" I believe the word you were looking for is "passed".

  • jumentum (unregistered) in reply to ip-guru
    ip-guru:
    Bananas:
    ip-guru:
    Land of Hope and Glory:
    Good Ridance Day we call it!
    Actually the USA wasn't granted independence until 3rd December 1783 so they haven't even got their holiday correct ;-)
    Not trying to start the war up all over again, but A's independence from B does sort of have, as its fundamental basis, that A doesn't need permission from B.

    :-)

    Good point, I will declare myself independent from the UK & tell HMRC where to go with their tax bill. How far will I get?

    Seriously though, you guys can have your holiday whenever you want & good luck with it (+ the weather is better in July)

    In some ways it does have to be retrospective. We can look back and say that when the US won the war, as it turned out, they were independent when they said they were. The British turned out not to have the control they thought they did.

    If you stop paying taxes and then end up in prison, we will come to the opposite conclusion about your independence.

  • (cs)

    Java is slow but the usual solution is to throw more hardware at the problem. Java is also annoying to write. Eclipse makes the language somewhat tolerable and IntelliJ makes it slightly more tolerable.

    TRWTF is that Android is Java based so now if you want to develop for half of the mobile devices out there, you have to learn Java. I was hoping to avoid doing that for pretty much my entire life.

  • jay (unregistered) in reply to ip-guru
    ip-guru:
    Land of Hope and Glory:
    Good Ridance Day we call it!
    Actually the USA wasn't granted independence until 3rd December 1783 so they haven't even got their holiday correct ;-)

    No, the US became independent on July 4, 1776. But the British news feed system was written in Java, so they didn't get the message until 1783.

  • jay (unregistered) in reply to ip-guru
    ip-guru:
    Land of Hope and Glory:
    Good Ridance Day we call it!
    Actually the USA wasn't granted independence until 3rd December 1783 so they haven't even got their holiday correct ;-)

    Umm, you know that the U.S. wasn't "granted independence", right? We declared our independence and then fought a war to make that declaration stick. We even wrote up a piece of paper spelling out that declaration. We call it, "The Declaration of Independence".

    Whether the U.S. became independent at the time of the declaration, or when the Treaty of Paris was signed ending the war, or sometime in between -- like, say, after the victory at Yorktown when it became clear that the U.S. had won -- is a matter of definitions. You could debate the exact date of many events. Like, was relativity discovered when Einstein published his first paper on the subject? When the eclipse observations were made that were generally acknowledged to confirm the theory? When the idea first occurred to Einstein? Etc. In real life, we tend to pick one of many possible dates for any anniversary celebrations and just stick with it.

    Okay, long reply to a trivial off-hand comment. I really should get back to work.

  • jay (unregistered) in reply to ip-guru
    ip-guru:
    Good point, I will declare myself independent from the UK & tell HMRC where to go with their tax bill. How far will I get?

    If Britain had won the war, then we wouldn't be debating when the U.S. became independent. July 4 would probably not be remembered as an interesting day at all. Or it might be remembered in Britain today pretty much the same way they remember November 5.

  • jay (unregistered) in reply to eViLegion
    eViLegion:
    ThePants999:
    Are you saying that, if you were evaluating potential suppliers and that exchange happened in front of you, they'd be top of your list?

    No I'm saying that, if someone like Dick had just put himself (and by extension his company) bottom of the list by being a dick, and if Pete subsequently served Dick a verbal hammering because of that specific dickishness, I would be more lenient on the company because at least Pete knows what he's talking about, and at least someone is prepared to put the incompetent fool in his place.

    Not top of the list, but the car crash that is Dick will have been at least partially offset by liking Pete.

    Assuming other factors are good, and I like the company (other than Dick), then I'd possibly hire them with a contract that stipulates that Dick mustn't be allowed near the project.

    Well, maybe that's how YOU would react if you were the decision maker. I certainly wouldn't see it that way. If representatives of a company I was considering partnering with actually insulted each other and yelled at each other during a sales meeting, I'd see that as a much bigger minus than one ignorant person on the team. That would tell me that this company just doesn't have its act together. If they're going to argue with each other like this during a sales meeting, what are they like when they're supposed to be doing the work?

    You say your respect for Pete would outweigh your disdain for Dick. But why would you assume that when it came time to do the work, Pete would win over Dick? Dick is the boss, not Pete. The fact that the company has made Dick the boss would, by default at least, indicate that upper management respects Dick's ideas more than they do Pete's.

  • jay (unregistered) in reply to eViLegion
    eViLegion:
    Also... why didn't Pete just say (right in front of these potential clients):

    "Dick, shut up... we DO use Java, all the fucking time, and we get great stuff done with it... YOU ARE THE ONLY ONE THAT DISAGREES AND EVERYONE ELSE IN THE OFFICE THINKS YOU'RE A IDIOT BECAUSE OF IT".

    That way, you win the contract, and you become Dick's supervisor.

    The people who are agreeing that this would salvage the company's position all seem to assume that Dick's response would be to just docilely accept that his original statement was foolish. But in real life, I'd think that would be very unlikely. Dick would surely tell Pete, in one way or another, that he is in charge and Pete should shut up. Depending on how volatile he is, that could range from, "Pete, that's enough, we'll talk about this later" to nasty insults of his own. And what then? The two of them having a screaming argument in front of the customer? Yeah, that looks good. I don't recall ever being in a sales meeting where the other company's reps started yelling at each other, but I've had plenty of times when I've been out with friends and two or more people get into a nasty argument. It's very uncomfortable. I just want to get out.

    There may be things Pete could say to salvage the situation. But making nasty insults at Dick is not one of them.

  • -/- (unregistered) in reply to SeySayux
    import java.io.*;
    import java.nio.file.*;
    import java.util.List;
    import java.util.LinkedList;
    
    public class FileUtils {
    
    	// Reads a text file line by line and returns the lines as an array.
    	public static String[] readLines(String filename) {
    		Path file = Paths.get(filename);
    		try (BufferedReader in = new BufferedReader(new InputStreamReader(Files.newInputStream(file)))) {
    			String line;
    			List<String> lines = new LinkedList<>();
    			while ((line = in.readLine()) != null) {
    				lines.add(line);
    			}
    			return lines.toArray(new String[lines.size()]);
    		} catch (IOException e) {
    			throw new RuntimeException(e);
    		}
    	}
    }

    Should be compilable using Java 7 (not completely sure about the NIO package, though). Yes, it is more verbose, but you write it once and reduce any further usage to a simple

    String[] lines = FileUtils.readLines("myFile.txt");

    Still more verbose, but manageable (even though the usage of arrays should be replaced by lists where possible). On the upside, I do not have to tell the interpreter what type my variable is every time I use it. ;) I like the language, even though it is not the ideal choice when having to script some (more or less) simple tasks - I prefer Python for that:

    with open(filename, "r") as file:
    	lines = file.readlines()

    However, library and IDE support for Java is great, the standard library vast (even though not completely thought through in some places) and the recent release of Java 7 and the future release of Java 8 will bring some nice syntactic sugar to the language.
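
    For what it's worth, here is a minimal sketch of the same task using the Java 7 NIO file API (assuming the file is UTF-8 encoded and, as in the Perl example, named in.txt); it is considerably less verbose than the hand-rolled reader above:

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.List;
    
    public class ReadLines {
    	public static void main(String[] args) throws Exception {
    		// Files.readAllLines opens the file, reads every line and closes it again (Java 7+)
    		List<String> lines = Files.readAllLines(Paths.get("in.txt"), StandardCharsets.UTF_8);
    		System.out.println(lines.size() + " lines read");
    	}
    }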

  • (cs) in reply to -/-
    -/-:
    However, library and IDE support for Java is great, the standard library vast (even though not completely thought through in some places) and the recent release of Java 7 and the future release of Java 8 will bring some nice syntactic sugar to the language.

    [citation needed (again)]

  • (cs) in reply to jay
    jay:
    ip-guru:
    Good point, I will declare myself independent from the UK & tell HMRC where to go with their tax bill. How far will I get?

    If Britain had won the war, then we wouldn't be debating when the U.S. became independent. July 4 would probably not be remembered as an interesting day at all. Or it might be remembered in Britain today pretty much the same way they remember November 5.

    Hahaaaaaaaaaaaa

  • Impossible (unregistered) in reply to SeySayux
    Compared to C++, where you're lucky if it correctly figures out the line number where your compiler reported the syntax error was (no live syntax checking possible, as compiling a C++ source file takes 3-5 seconds)? Yes.

    Just forget about stuff like static code analysis, debugger integration, diagram generation, ... unless you purchase some very expensive tool which usually bails out once it has to parse template or preprocessor code.

    Are you for real?

    Ever heard of Visual Studio Express (free)? Eclipse CDT? LLVM/clang?

    Impossible software i guess..

  • Impossible (unregistered) in reply to -/-
    List<String> lines = new LinkedList<>();
    even though the usage of arrays should be replaced by lists where possible

    That's the main problem with Java .. Java programmers. Why would you EVER use a linked list for this? The payload of the list (a pointer to a string) is smaller than the list overhead. And an array is perfectly fine if you don't want to modify it further.

    Java has its uses; I have written several applications, both server and client, in it. But you have to remember that in essence, everything in Java is a pointer. The big performance impact doesn't come from the JVM, it comes from pointer chasing across the heap completely trashing the cache. The same is true for everything .NET.

  • -/- (unregistered) in reply to Impossible

    I chose a LinkedList, knowing of the performance loss, for two reasons:

    1. Appending to a linked list is cheaper due to not having to repeatedly allocate a new array once the old one is full and we are doing nothing but appending in the code snippet.

    2. We were going to convert the list to an array later, so the higher cost of accessing an element in a linked list compared to an array list was irrelevant. Of course, if a list is used as the result, using an array list from the beginning or converting the linked list into an array list would be preferable.

  • MC (unregistered) in reply to ip-guru

    The US celebrates its declaring of independence, not its "granting" of it by the very authority it no longer recognized.

  • Impossible (unregistered) in reply to -/-
    -/-:
    1) Appending to a linked list is cheaper due to not having to repeatedly allocate a new array once the old one is full and we are doing nothing but appending in the code snippet.

    So, allocating on EVERY insert (LinkedList: N allocations) is better than allocating on SOME inserts (ArrayList grows by 50% of its current size when full, about log N allocations)?

    Even in the worst case (growing by 50% on the last insert), an ArrayList uses much less memory than a LinkedList.

    I honestly cannot think of any use case where the Java LinkedList would be preferable (C++ is different because the list element holds the data, not just a pointer like in Java). Maybe if you do a lot of random inserts .. but even then, CPUs are very good at reading/writing contiguous memory, so searching for the insert location in a LinkedList may be the bottleneck here (cache misses).
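
    For what it's worth, a minimal and unscientific sketch of the append-only comparison being argued about (the class and numbers are purely illustrative; a serious benchmark would also need JIT warm-up and multiple runs):

    import java.util.ArrayList;
    import java.util.LinkedList;
    import java.util.List;
    
    public class AppendBench {
    	// Appends n strings and converts to an array, mimicking the readLines workload.
    	static long timeAppends(List<String> list, int n) {
    		long start = System.nanoTime();
    		for (int i = 0; i < n; i++) {
    			list.add("line " + i);
    		}
    		list.toArray(new String[list.size()]);
    		return (System.nanoTime() - start) / 1_000_000; // milliseconds
    	}
    
    	public static void main(String[] args) {
    		int n = 1_000_000;
    		System.out.println("ArrayList:  " + timeAppends(new ArrayList<String>(), n) + " ms");
    		System.out.println("LinkedList: " + timeAppends(new LinkedList<String>(), n) + " ms");
    	}
    }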

  • tom (unregistered)

    C++ is way more powerful than Java. It allows us to create problems unlike anything you could encounter in Java code. This requires some pretty dark sorcery from developers... and dark sorcery usually means a good pay.

    If you're an aspiring programmer, don't learn Java. Here's your new career plan:

    1. Develop an enterprise app in C++
    2. Make sure it uses multiple inheritance, a mix of pointers and references (pointers to pointers to references to pointers to references to references to function pointers are preferred), some native code, an in-house library for string handling, collections and maths (cause STL sucks and isn't portable enough for a ninja like you).
    3. Be one of the few people in the world remotely likely to understand the mess you created.
    4. $$$
  • -/- (unregistered) in reply to Impossible
    Impossible:
    So, allocating on EVERY insert (LinkedList: N allocations) is better than allocating on SOME inserts (ArrayList grows by 50% of its current size when full, about log N allocations)?
    To be honest, all professional work I've done so far was in Java, which means I do not have extensive knowledge about how long allocating memory actually takes. That said, I assume that allocating a large strip of memory is faster than allocating a number of small parts, which means that allocating a certain number of bytes in one call is most definitely faster. However, I would be willing to wager that allocating a number of small memory parts might be quicker if you have to constantly re-allocate the memory for the array and copy the already existing values into the newly allocated array. Still, this is just a guess and I will be the first to admit to being wrong if that is the case. Link to a reliable source concerning this would be appreciated.
    Even in the worst case (growing by 50% on the last insert), an ArrayList uses much less memory than a LinkedList.
    Actually, I just wrote a huge wall of text countering that argument, including a bit of simple math to prove my assumption that an ArrayList with an initial size of 16 elements always needs more space to store all object references (since memory is freed by the garbage collector rather than manually once it is no longer needed), only to find out that an ArrayList starts with an initial size of 10 elements, which leads to it actually using less memory (2060 instead of 2400 bytes, assuming 4-byte references and a correct calculation). In short, I concede regarding memory consumption.
  • Gen. George Washington Lincoln (unregistered) in reply to ip-guru

    Actually my dear good fellow, July 4th 1776 is when the Declaration of Independence was enacted in the Constitution of the United States of America.

    It was a bold document, the first government based on the principles of Plato's Republic.

    One which fought against the vile imperial slave driving anti human colonial policies of the disgusting British system.

  • Fred Nurk (unregistered) in reply to Steve The Cynic
    Steve The Cynic:
    I'll accept, however, that "10 times faster" is hyperbole, partly because if it were true, the 80186 would have been the first x86-line processor with a superscalar architecture (instructions per clock > 1). And it wasn't. And don't forget that the execution time of most instructions was dominated by their fetch time

    Reading from the 80186 manual 'the time normally required to fetch instructions "disappears" because the [Execution Unit] executes instructions that have already been fetched by the [Bus Interface Unit]'...

    'Execution speed is gained by performing the effective address calculation () with a dedicated hardware adder in the 80186,188 bus-interface unit, rather than with a microcode routine (used by the 8086,88)'

  • gnasher729 (unregistered) in reply to jay
    jay:
    Well, maybe that's how YOU would react if you were the decision maker. I certainly wouldn't see it that way. If representatives of a company I was considering partnering with actually insulted each other and yelled at each other during a sales meeting, I'd see that as a much bigger minus than one ignorant person on the team. That would tell me that this company just doesn't have its act together. If they're going to argue with each other like this during a sales meeting, what are they like when they're supposed to be doing the work?

    You say your respect for Pete would outweigh your disdain for Dick. But why would you assume that when it came time to do the work, Pete would win over Dick? Dick is the boss, not Pete. The fact that the company has made Dick the boss would, by default at least, indicate that upper management respects Dick's ideas more than they do Pete's.

    Here's a question: If you were the decision maker, and Dick being one just sank the deal as far as you are concerned, would you, for the benefit of mankind in general, make a call to someone as high up in the company as possible to tell them that Dick just cost them a major amount of money through ignorance and pigheadedness?
  • gnasher729 (unregistered) in reply to SeySayux
    SeySayux:
    Compared to C++, where you're lucky if it correctly figures out the line number where your compiler reported the syntax error was (no live syntax checking possible, as compiling a C++ source file takes 3-5 seconds)? Yes.

    Just forget about stuff like static code analysis, debugger integration, diagram generation, ... unless you purchase some very expensive tool which usually bails out once it has to parse template or preprocessor code.

    A Clang-based C++ compiler like the one in Xcode gives you all that. 3-5 seconds for compiling a C++ source file means you should remove a few tens of thousands of lines of source from that file :-)

  • Publius (unregistered) in reply to ip-guru
    ip-guru:
    Bananas:
    ip-guru:
    Land of Hope and Glory:
    Good Ridance Day we call it!
    Actually the USA wasn't granted independence until 3rd December 1783 so they haven't even got their holiday correct ;-)
    Not trying to start the war up all over again, but A's independence from B does sort of have, as its fundamental basis, that A doesn't need permission from B.

    :-)

    Good point, I will declare myself independent from the UK & tell HMRC where to go with their tax bill. How far will I get?

    Seriously though, you guys can have your holiday whenever you want & good luck with it (+ the weather is better in July)

    Well, if you are a) willing to die for it and b) the HMRC is pouring resources into maintaining a global empire with sailboats, and c) get a little help from France, you might be pleasantly surprised...

  • Impossible (unregistered) in reply to -/-
    -/-:
    However, I would be willing to wager that allocating a number of small memory parts might be quicker if you have to constantly re-allocate the memory for the array and copy the already existing values into the newly allocated array. Still, this is just a guess and I will be the first to admit to being wrong if that is the case. Link to a reliable source concerning this would be appreciated.

    Don't know about reliable, but if you google it you will find many results confirming my claims (eg: http://java.dzone.com/articles/java-collection-performance). Best thing is always to benchmark your specific case. What these micro benchmarks don't show is the long term effect of memory fragmentation and gc pressure due to the many small allocations of LinkedList.

  • defaultex (unregistered)

    It's funny how many like to point out the GC in Java, but haven't read about how malloc and free in C++ actually work.

    When you call malloc, it searches a linked list of free memory blocks for one that is greater than or equal to the size you requested. If it's greater, it breaks the block up and returns the excess to the free memory listing.

    When you call free it adds the block of memory you called it on to the free memory listing.

    This eventually causes a lot of fragmentation in your program's memory pool, to the point where you wind up requesting a block that is larger than any of the blocks in the free memory listing. When this occurs malloc will stop to defragment the free memory listing.

    Sounds a lot like a garbage collector to me, keep in mind most garbage collectors are hooked to the 'new' keyword.
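
    To make the bookkeeping concrete, here is a toy model of the first-fit free-list scheme described above, written in Java purely for illustration (the class and method names are made up for the example; real allocators are far more sophisticated, and this sketch tracks only offsets and sizes, not actual memory):

    import java.util.Iterator;
    import java.util.LinkedList;
    
    public class ToyFreeList {
    	static class Block {
    		int offset, size;
    		Block(int offset, int size) { this.offset = offset; this.size = size; }
    	}
    
    	private final LinkedList<Block> free = new LinkedList<>();
    
    	ToyFreeList(int heapSize) {
    		free.add(new Block(0, heapSize)); // initially one big free block
    	}
    
    	// "malloc": first-fit search; split the block if it is larger than requested.
    	Integer allocate(int size) {
    		Iterator<Block> it = free.iterator();
    		while (it.hasNext()) {
    			Block b = it.next();
    			if (b.size >= size) {
    				int addr = b.offset;
    				b.offset += size;
    				b.size -= size;
    				if (b.size == 0) it.remove();
    				return addr;
    			}
    		}
    		return null; // no block big enough: fragmentation (a real allocator would coalesce or grow the heap)
    	}
    
    	// "free": put the block back on the free list (no coalescing in this toy version).
    	void release(int offset, int size) {
    		free.add(new Block(offset, size));
    	}
    
    	public static void main(String[] args) {
    		ToyFreeList heap = new ToyFreeList(1024);
    		Integer a = heap.allocate(100);
    		Integer b = heap.allocate(200);
    		heap.release(a, 100);
    		System.out.println("second allocation at offset " + b); // prints 100
    	}
    }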

  • Luiz Felipe (unregistered) in reply to Impossible
    Impossible:
    List<String> lines = new LinkedList<>();
    even though the usage of arrays should be replaced by lists where possible

    That's the main problem with Java .. Java programmers. Why would you EVER use a linked list for this? The payload of the list (a pointer to a string) is smaller than the list overhead. And an array is perfectly fine if you don't want to modify it further.

    Java has its uses; I have written several applications, both server and client, in it. But you have to remember that in essence, everything in Java is a pointer. The big performance impact doesn't come from the JVM, it comes from pointer chasing across the heap completely trashing the cache. The same is true for everything .NET.

    Except the .NET IL compiler can put local variables on the stack instead of the heap if the object is small. Also, the .NET compiler can do RVO. And to top it off, the .NET memory model is more similar to x86/x64 than Java's, which makes more optimization possible. Java was created to run on just about any type of processor, so it can't run fast on any of them.

  • Luiz Felipe (unregistered) in reply to defaultex
    defaultex:
    It's funny how many like to point out the GC in Java, but haven't read about how malloc and free in C++ actually work.

    When you call malloc, it searches a linked list of free memory blocks for one that is greater than or equal to the size you requested. If it's greater, it breaks the block up and returns the excess to the free memory listing.

    When you call free it adds the block of memory you called it on to the free memory listing.

    This eventually causes a lot of fragmentation in your program's memory pool, to the point where you wind up requesting a block that is larger than any of the blocks in the free memory listing. When this occurs malloc will stop to defragment the free memory listing.

    Sounds a lot like a garbage collector to me, keep in mind most garbage collectors are hooked to the 'new' keyword.

    That is why I suspect Java/.NET programs are slow not because of the garbage collector, but because of too much object-oriented architecture. There is only one way to go faster: do fewer things, not more. All those patterns and interfaces and indirections are the culprit, not just the garbage collector.

    Today's garbage collectors are amazing pieces of engineering. They don't stop the world anymore; they can collect while pausing your thread for mere microseconds, faster than a disk access, and they can do it on a scheduled basis, so you get predictable performance. That predictability matters more than having no pauses at all, and it is why C++ programs are more performant: they reclaim memory when the subroutine returns, which is predictable. You can't just say a Java program is slow only because of the garbage collector; the real culprit is the programmer who doesn't know how to write performant code.

  • Impossible (unregistered) in reply to Luiz Felipe
    Luiz Felipe:
    Except the .NET IL compiler can put local variables on the stack instead of the heap if the object is small. Also, the .NET compiler can do RVO. And to top it off, the .NET memory model is more similar to x86/x64 than Java's, which makes more optimization possible. Java was created to run on just about any type of processor, so it can't run fast on any of them.

    Except it doesn't change anything in this case, because a collection is not a local variable. And you cannot optimize this, because you could put anything in it that derives from Object, so you need a pointer. That said, .NET IS much better in this regard because of better generics: if you define a collection of a basic type or struct, the data will be embedded in the collection, similar to C++, not just a pointer to it like in Java (Java doesn't know about generics at the bytecode level).
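
    For illustration, a minimal sketch (the class name is just an assumption for the example) showing the erasure being referred to: the type parameter is gone at runtime, and every List<T> is just a list of object references.

    import java.util.ArrayList;
    import java.util.List;
    
    public class ErasureDemo {
    	public static void main(String[] args) {
    		List<String> strings = new ArrayList<String>();
    		List<Integer> numbers = new ArrayList<Integer>();
    		// The type parameters are erased by the compiler: at runtime both
    		// lists are plain ArrayLists holding object references.
    		System.out.println(strings.getClass() == numbers.getClass()); // prints true
    	}
    }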

  • gnasher729 (unregistered) in reply to defaultex
    defaultex:
    It's funny how many like to point out the GC in Java, but haven't read about how malloc and free in C++ actually work.

    When you call malloc, it searches a linked list of free memory blocks for one that is greater than or equal to the size you requested. If it's greater, it breaks the block up and returns the excess to the free memory listing.

    When you call free it adds the block of memory you called it on to the free memory listing.

    This eventually causes a lot of fragmentation in your program's memory pool, to the point where you wind up requesting a block that is larger than any of the blocks in the free memory listing. When this occurs malloc will stop to defragment the free memory listing.

    I don't know what library you are using, but neither the gcc C library nor the new Clang C library that come with Xcode do that kind of thing. Instead they are using pools for allocations of blocks of same sizes, which has the advantage that blocks of the same type tend to be allocated at nearby addresses and are more cache friendly that way. Allocating or deallocating a block doesn't do any linked lists but just sets one bit per block. On top of that, significant optimisations are done for multi-threaded operations so that different threads allocate from different areas, which avoids locking and/or atomic operations for allocation and deallocation.
  • (cs) in reply to Fred Nurk
    Fred Nurk:
    Steve The Cynic:
    I'll accept, however, that "10 times faster" is hyperbole, partly because if it were true, the 80186 would have been the first x86-line processor with a superscalar architecture (instructions per clock > 1). And it wasn't. And don't forget that the execution time of most instructions was dominated by their fetch time

    Reading from the 80186 manual 'the time normally required to fetch instructions "disappears" because the [Execution Unit] executes instructions that have already been fetched by the [Bus Interface Unit]'...

    'Execution speed is gained by performing the effective address calculation () with a dedicated hardware adder in the 80186,188 bus-interface unit, rather than with a microcode routine (used by the 8086,88)'

    Yes, the two phases (fetch and execute) run in parallel (this instruction executes while the next one is being fetched), *but* for a wide range of common instructions, the next one takes longer to fetch than the current one takes to execute, so improving the execution cycle count doesn't improve the instructions-per-second throughput. Instead, the CPU spends a larger fraction of its time waiting for fetches to complete.

    In the end, the memory subsystem has to be able to keep up with the consumption of instruction space by the execution unit, so improving the execution unit is only part of the answer. Later processors did things like widening the data bus and reducing the cycle count per access, and then later still by adding cache at varying fractions of the main (accelerated) clock frequency. (In the P2/P3 era, L2 cache was usually run at external bus frequency, i.e. 100MHz or so (+/- 33MHz in some cases) while the main CPU clock rose and rose. Later, L2 cache moved on-chip where it could run at half-speed, and so on.)

    Summary: it's an unholy mess.

  • ip-guru (unregistered) in reply to Publius
    c) get a little help from France, you might be pleasantly surprised...
    Help from France? I think I would rather pay the tax!
  • trtrwtf (unregistered) in reply to ip-guru
    ip-guru:
    c) get a little help from France, you might be pleasantly surprised...
    Help from France? I think I would rather pay the tax!

    Hey, it can be handy. If it weren't for the French support in the War for Independence, we might be speaking French today here in Boston.

  • (cs) in reply to SeySayux
    SeySayux:
    Take, for example, the task "open a text file, read it line by line and store it in an array". I'll give you the Perl code:
    open(my $file, "<", "in.txt") or die;
    my @list = <$file>;
    

    Java programmers: please show me the Java code for this without looking it up or using your IDE.

    Not a Javan here, but in Xojo (f/k/a REALbasic), it's this:

    dim fichier as FolderItem, arrivée as TextInputStream, liste(-1) as String
    fichier = SpecialFolder.Documents.Child("in.txt")
    arrivée = TextInputStream.Create(fichier)
    do until arrivée.EOF
       liste.Append arrivée.ReadLine
    loop
