• Anon (unregistered)

    In 99% of cases where this type of code appears, performance is irrelevant, because for 99% of code, this kind of micro-optimisation is irrelevant. As someone has pointed out, a method named "displayProductInfo" is likely to be doing something slower than throwing an exception. You could just as well optimise for code size.

A more practical reason that this code pattern is bad practice is that displayProductInfo might sometimes throw an IndexOutOfBoundsException for some reason other than idx having reached the size of the array. Typically, that reason will be a bug in someone else's code, or some assumption failing, and catching the exception here obfuscates it.

In general, it is probably a bad idea to handle a system exception unless you think you know why it was thrown. In particular, don't do it in general patterns.
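In Python terms, the masking problem looks like this (display_product_info and its lookup table are made up for illustration):

```python
prodnums = [101, 102, 103, 104]

def display_product_info(n):
    # Hypothetical display routine with a bug of its own: its
    # lookup table is shorter than prodnums.
    names = ["widget", "gadget"]
    print(n, names[n - 101])  # raises IndexError once n reaches 103

shown = []
try:
    idx = 0
    while True:
        display_product_info(prodnums[idx])
        shown.append(prodnums[idx])
        idx += 1
except IndexError:
    pass  # meant to mean "end of array", but also eats the callee's bug

# shown is [101, 102]: the bug in the lookup table was silently
# mistaken for the end of the list.
```

The loop quietly stops two items early, and nothing tells you whether that was the end of the data or a bug in the callee.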

  • (cs) in reply to Ross
    Anonymous:
    A colleague once asked if there was a handy perl function which checks if all elements of an array are equal.
    ...
    sort { $a != $b && die } @foo



    The number of unique elements in a perl array...
    sub uniq(@);

    if (uniq @foo == 0) {
    # @foo is empty
    }

    if (uniq @foo == 1) {
    # @foo is non-empty, and
    # all elements of @foo are the same
    }

    if (uniq @foo <= 1) {
    # all elements of @foo are the same
    # or maybe there aren't any
    }

    sub uniq(@)
    {
    return keys %{{ map { $_ => 1 } @_ }};
    }
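For comparison, the same checks sketched in Python, where a set does the counting:

```python
def uniq(items):
    # number of distinct elements, like the Perl uniq above
    return len(set(items))

foo = [3, 3, 3]

if uniq(foo) == 0:
    print("foo is empty")
if uniq(foo) == 1:
    print("foo is non-empty and all elements are equal")
if uniq(foo) <= 1:
    print("all elements are equal, or maybe there aren't any")
```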
  • (cs)

    This is actually a practice for 2 real reasons:

    • prodnums is mutable and this is used in a multi-threaded application.
    • prodnums is a collection with no length (basically an incrementer) and does not implement IEnumerable.

There are times when you don't own an object and have no access to redesign it, so you have to come up with weird, or "bad", code to work around it.

  • (cs) in reply to masklinn
    masklinn:
    Cyresse:
    I should point out I work in Python where a try-except clause is free.

    Uh, no it isn't, the try clause is almost free but the except one sure isn't. It's pythonic to ask for forgiveness more than permission when you have to ask for forgiveness less than you'd ask for permission.

    Maybe I misworded that. I pretty much meant what you said above... this whole communication thing bites. [:#]
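A minimal sketch of the two styles masklinn is contrasting (the dictionary and helper names here are invented for illustration):

```python
d = {"key": 1}

def eafp(k):
    # "Easier to Ask Forgiveness than Permission": the try itself is
    # nearly free, but every miss pays for raising and catching KeyError.
    try:
        return d[k]
    except KeyError:
        return None

def lbyl(k):
    # "Look Before You Leap": pays a membership test on every call,
    # but never raises.
    return d[k] if k in d else None
```

Both return the same results; EAFP tends to win when misses are rare, LBYL when they are common, which is exactly the "ask for forgiveness less often than you'd ask for permission" rule.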

  • diaphanein (unregistered) in reply to Travis

    First point:  many architectures support pre and post increment operations in a single operation.

    Second point:  This sort of breaking may be a bit dangerous.  If I'm not mistaken, it is possible to disable bounds checking on arrays in .Net.  So, with certain optimizations, this code will fail to break correctly.

  • Tony Morris (unregistered)

    On the issue of questioning the authenticity of some of the WTF posts:

    I work for a large software corporation. We distribute our own Java VM and J2EE application server. I used to work on the J2EE app server, but these days work on the J2SE API Specification implementation and extensions. I say "work on" because I currently maintain code written by some monkeys, whilst trying to find the time to rewrite it properly. I can assure you that what you witness of The Daily WTF is almost certainly authentic. I have code in front of me that would make every post that I have ever seen, look like the work of a knowledgeable person. It far exceeds the magnitude of stupidity that you encounter in your daily wander to this website. I am not exaggerating in any way. You might ask, "am I in a corner case situation?". However, this code in front of me right now is certainly not the first, and I very much doubt, the last, example of the lack of competence in our industry that I have witnessed. I simply don't post it because my children are hungry and depend on me to help them out, but if my situation were to change, I believe that disclosure of the details that I talk about would permit some of your perspectives to change.

Once you have descended to the level of stupidity that I witness each day, you just shake your head, chuckle, and move on, always thinking about how you are going to get yourself out of this ridiculous joke. I applaud those who do not have to deal with it, but with that applause comes a caution, since it is you who are the corner case. That is, you might be able to demonstrate some level of competency and reap the rewards as a result, but this is not the most common situation, at least from my humble observations. This industry rides entirely on egotism and not technical competence; for example, "appearing to know what you are doing" as opposed to "knowing what you are doing". It is not how things appear that is important, it is how things are, and this industry is how it is - fucked.

  • lagroue (unregistered)

    No wtf in any manner. A readable example :

    try:
        self.connection.close()
        del self.connection
    except:
        pass  # connection was cleaned already

  • William Hughes (unregistered)

    Unfortunately, some API's force this on you.

    Case in point: The UPnP Internet Gateway Device protocol specifies the following:

    2.4.14. GetGenericPortMappingEntry: This action retrieves NAT port mappings one entry at a time. Control points can call this action with an incrementing array index until no more entries are found on the gateway. If PortMappingNumberOfEntries is updated during a call, the process may have to start over. Entries in the array are contiguous. As entries are deleted, the array is compacted, and the evented variable PortMappingNumberOfEntries is decremented. Port mappings are logically stored as an array on the IGD and retrieved using an array index ranging from 0 to PortMappingNumberOfEntries-1.

    Unfortunately, they never expose the method to get StateVariables.
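With an API like that, the increment-until-error loop is forced on you. A Python sketch, where get_generic_port_mapping_entry and the SpecifiedArrayIndexInvalid error are local stand-ins for the real SOAP action and its error code:

```python
class SpecifiedArrayIndexInvalid(Exception):
    """Stand-in for the error the gateway returns past the last entry."""

_mappings = [("TCP", 8080), ("UDP", 5000)]  # pretend gateway state

def get_generic_port_mapping_entry(index):
    # Stub standing in for the real GetGenericPortMappingEntry action.
    try:
        return _mappings[index]
    except IndexError:
        raise SpecifiedArrayIndexInvalid(index)

def all_port_mappings():
    entries = []
    index = 0
    while True:
        try:
            entries.append(get_generic_port_mapping_entry(index))
        except SpecifiedArrayIndexInvalid:
            return entries  # no more entries on the gateway
        index += 1
```

Here the exception-terminated loop is not a WTF at all; it is the only enumeration protocol the spec offers.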

  • (cs) in reply to Anon

    Anonymous:
    In 99% of cases where this type of code appears, performance is irrelevant, because for 99% of code, this kind of micro-optimisation is irrelevant. As someone has pointed out, a method named "displayProductInfo" is likely to be doing something slower than throwing an exception. You could just as well optimise for code size.

I don't think anyone is arguing that the reason not to do this is performance. There were people saying to do it for performance reasons, and others are explaining that that claim is not true.

  • (cs) in reply to lagroue

    Anonymous:
    No wtf in any manner. A readable example : try: self.connection.close() del self.connection except: pass # connection was cleaned already

    You're not comparing apples with apples.

A Python version of this would be:

    try:
      idx = 0
      while True:
        displayProductInfo(prodnums[idx])
        idx += 1
    except IndexError:
      pass
     


Which is incredibly unpythonic and ugly. A pythonic way would be:

     for item in prodnums:  
        displayProductInfo(item)
       


    Even if you couldn't bring yourself to do it simply,  any of the following would've been more suitable.


      idx = 0
      while idx < len(prodnums):
         displayProductInfo(prodnums[idx])
         idx += 1

    or

    for idx in range(len(prodnums)):
       displayProductInfo(prodnums[idx])
     

    The WTF becomes even more apparent when turned into Python.

    So ugly, so spiky... (mind, Java always looks bad directly translated to Python.)


  • (cs) in reply to Cyresse
    Cyresse:
    So ugly, so spiky... (mind, Java always looks bad directly translated to Python.)


    Surely you've seen it translated to perl?  It's like English to Chinese.
  • Sigi (unregistered)

    I've seen this before in a book about efficient Java programming mentioned as an anti-pattern.

  • Monkey Fuel (unregistered) in reply to CornedBee

This is an attempt at optimization.
One of the many myths about Java performance is that this method of iterating over an array is faster than using a for loop.
Perhaps it was once so in some dodgy beta version of Java 1.0, but it's certainly no longer the case.


  • (cs) in reply to emptyset
    emptyset:
in my country, a sad majority of people listen to AM radio, read books by danielle steele, and have interpreted genesis to reach the conclusion the snake turned into tyranosaurus rex and leapt out of the tree of knowledge after adam's fall.  when you expect these people to employ 'logic' to solve a computational problem, hilarity ensues.


    Well said, sir.
  • Alan (unregistered) in reply to Danny

At least in Python, code that catches some of the built-in exceptions can be faster than code which avoids it. For instance, there's a Python benchmark where one of the code fragments involves creating 10,000 hash keys and then incrementing them a few times.

    In the exception style the code is written as something like

        hash = {}
        for x in range(1, 10):
            for y in range(1, 10000):
                try:
                    hash[y] += 1
                except KeyError:
                    hash[y] = 1

Incidentally, this try/except version is faster if a large number of iterations of the outer loop is run -- that is, if throwing the exception is rare. So this may be a valid optimization for some tight spots where the key is almost always present already.

    One other way to write the core is, for instance,

        hash[y] = hash.get(y, 0) + 1

    or

        if y in hash:
            hash[y] += 1
        else:
            hash[y] = 1

    The get() version was the slowest of them all.
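Side by side, runnable versions of the counting styles above (using dict.get() for the third variant) all produce identical counts; only their speed differs, depending on how often the key is already present:

```python
data = [y for _ in range(9) for y in range(1, 10000)]

# 1. EAFP: catch the KeyError on the first occurrence of each key
counts_try = {}
for y in data:
    try:
        counts_try[y] += 1
    except KeyError:
        counts_try[y] = 1

# 2. LBYL: test membership first
counts_in = {}
for y in data:
    if y in counts_in:
        counts_in[y] += 1
    else:
        counts_in[y] = 1

# 3. dict.get with a default value
counts_get = {}
for y in data:
    counts_get[y] = counts_get.get(y, 0) + 1
```

With nine outer passes, the KeyError fires on only one pass in nine, which is the regime where the try/except version pulls ahead.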

  • Emmanuel D. (unregistered) in reply to kipthegreat

Using an integer as the flow control variable, you'll hardly notice any difference. Now, using a complex iterator might change everything, because you'll save a useless temporary copy of the variable by doing a pre-increment instead of a post-increment (remember: post-increment will store your current value, increment, and then return the stored value, while pre-increment is supposed to modify the value and then return it, saving the temporary store).

    A new way of thinking is now open to WTFers :)

    Regards,

    Emmanuel D.

  • Rev. Johnny Healey (unregistered) in reply to Lyndon

    Since the programmer is using an array (sequential memory accessed by index) this really isn't any faster.  However, for linked lists, this is a much faster way to iterate through all of the items.  Finding the number of items in a linked list involves iterating through every object, so you might as well process them while you're doing it.

    Of course, that doesn't apply here.

  • BogusDude (unregistered) in reply to Travis
    Anonymous:

    God you people crack me up...I love when people give excuses like "it is faster" for examples like this.  If this is how someone "optimizes" code then I feel sorry for their employer and coworkers.  An extra if statement from checking if the index is past the range of elements is not going to see any noticeable performance.

    That is almost as bad as the people who say that pre increment in a for loop is faster than post increment...ridiculous

Actually, pre-increment in a loop IS faster IN C++, provided your increment variable is NOT OF NATIVE TYPE. If you have ever implemented the increment operators in C++, you would understand that in order to do post-increment, you need to do something like:

    MyClass& MyClass::operator++()      // pre-increment
    {
        ++this->variableToIncrement;
        return *this;
    }

    MyClass MyClass::operator++(int)    // post-increment
    {
        MyClass temp(*this); // *********HERE
        ++this->variableToIncrement;
        return temp;
    }

    The HERE comment in my sample function indicates where you are creating a temporary variable, to store the result, while you increment this object and then return the temporary variable. This will not only create ONE temporary but TWO temporary variables, because your statement might look like this:

    MyClass myClass;

    ...

    if (myClass++) { }

    What now happens is that your ++ operator creates a temporary internally. When your function returns, C++ creates a hidden temporary to store the result of the ++ operator call and initialises it to the value of the one you returned.

    So, if you are simply doing: for (int i = 0; i < 10; ++i) it doesn't matter where you put the ++, but if you are using the STL with Iterators, it's very important you put the ++ before the variable.

    Primitive types don't matter because the compiler will convert the postincrement to the preincrement for you if you don't use the return value.

    Read Herb Sutter's book.

    PS. It's been a while, so I might have my operator++() and my operator++(int) mixed up.

    Also, I don't know the first thing about Java.

  • (cs) in reply to Rain dog
    Anonymous:

    Except in the case of overloaded operators, the preincrement operator and post increment operators are the same now.

    No they're not. Whoever told you that?
  • (cs) in reply to NoName
    Anonymous:
    Anonymous:
    Anonymous:
    Anonymous:
    That is almost as bad as the people who say that pre increment in a for loop is faster than post increment...ridiculous




    Yes, there was a CPU/compiler combination where running a for() loop backwards was faster than running it forwards. But that doesn't mean that every for() loop should be run backwards.

    LMAO!

    Do you really think that pre-increment is the same as decrement?
  • Radu S. (unregistered) in reply to kjacquemin

    Good point ;), but I think it was missed in the middle of pre/post increment debate.

  • Radu S. (unregistered) in reply to kjacquemin
    kjacquemin:

    I have had to code this way due to conditions beyond my control.  For example:

    try
    {
      int idx = 0;
      while (true)
      {
        Foo.displayProductInfo(idx);
        idx++;
      }
    }
    catch (IndexOutOfBoundException ex)
    {
      // nil
    }
    The dolt that designed the Foo object provided no way to determine the max value that the idx var can have.

    This should have been added to my previous post... but it was my first reply and I only did see the quote button after posting...

  • (cs) in reply to Rev. Johnny Healey
    Anonymous:
    Since the programmer is using an array (sequential memory accessed by index) this really isn't any faster.  However, for linked lists, this is a much faster way to iterate through all of the items.


    No. In a linked list, you would test for "is there a next element" as termination condition - NOT catch a NullPointerException.
  • (cs) in reply to bit
    bit:
    I'm not sure I understand what many of you are saying, because I don't know Java. But if we replace that while(true) with a for idx := 1 to len(array), neither the program nor we are checking bounds. So, I don't grasp that thread about the slowness of bound checking vs. exceptions. We need no checking at all. Where am I wrong?


In your lack of knowledge about Java. Java internally checks bounds for EVERY access to an array. This all-too-common antipattern almost certainly arose because C programmers who hear about this for the first time are often horrified by the perceived inefficiency, and any way to eliminate it appeals to them.

    Anyway, current JIT compilers eliminate the bounds check in simple loop cases like this (i.e. 99% of the time).
  • (cs) in reply to Baf
    Anonymous:

    The thing that makes me skeptical is that exceptions in early Java were really slow.  Maybe they've improved over the years; I haven't been keeping track.  But in the old days, at least, every exception was a new object that the system had to allocate and construct.


    And it's not just any old object. An exception contains a stack trace, which is constructed along with the exception, and that is really, REALLY slow.

  • Nick Coghlan (unregistered) in reply to makomk

     

    You know, I think Python does something like this internally when you use an iterator object - it signals the end-of-list by throwing an exception.

    Indeed it does. However, that's a natural consequence of the fact you can make your own iterators for use in for loops by writing a class with __iter__() and next() methods. The next() method has to be able to return any value as the next item in the for loop, but it also needs to be able to tell the for-loop machinery that there are no more values so the loop should terminate. The obvious answer to that problem is to raise an exception (StopIteration, to be precise) to say "I'm done".

    Cheers,
    Nick.
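A minimal hand-rolled iterator shows the mechanism (Python 3 spells the method __next__(); the next() above is the Python 2 spelling):

```python
class CountDown:
    """Iterator yielding n, n-1, ..., 1, then signalling exhaustion."""
    def __init__(self, n):
        self.n = n
    def __iter__(self):
        return self
    def __next__(self):
        if self.n <= 0:
            raise StopIteration  # "I'm done" - this is what ends the for loop
        self.n -= 1
        return self.n + 1

# The for-loop machinery catches StopIteration for us:
result = [x for x in CountDown(3)]  # [3, 2, 1]
```

So every ordinary Python for loop really does terminate via a caught exception; the difference from the WTF is that it's an exception defined for exactly that purpose.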
  • (cs) in reply to Arachnid
    Anonymous:
    Anonymous:
    Yes, there was a CPU/compiler combination where running a for() loop backwards was faster than running it forwards. But that doesn't mean that every for() loop should be run backwards.

    Actually, that's universally true. When your loop ends at 0, all that's required is a CMP operation and checking the zero bit in the flags. When your loop ends elsewhere, you have to subtract the end value from the loop and check if it's zero or positive.

    Except, at least on x86, CMP is the same thing as a subtraction that doesn't store the result. From the Intel instruction set reference:

    Intel:
    The comparison is performed by subtracting the second operand from the first operand and then setting the status flags in the same manner as the SUB instruction.

    IA-32 Intel® Architecture Software Developer's Manual, Volume 2: Instruction Set Reference, 2003 edition, p. 3-85

    In my quick test with GCC, the only differences between a count-up and count-down loop are the initial value of the index, what the index is compared to, and which type of conditional jump is used after the comparison.

  • Anonymous (unregistered) in reply to cbane

    Just to clear this Java thing up:

    public class LoopTest {

      public static void main(String[] args) {
        int[] arr = new int[10000000];
        for (int i = 0; i < arr.length; i++) {
          arr[i] = (int) (Math.random() * 10);
        }
        long timeA = 0; long timeB = 0; long timeC = 0; long timeD = 0;
        long foores = 0;
        long timer;
        for (int k = 0; k < 10; k++) {
          foores = 0;
          System.out.println("A");
          timer = System.currentTimeMillis();
          for (int i = 0; i < arr.length; i++) {
            foores += arr[i];
          }
          timeA += System.currentTimeMillis() - timer;
          foores = 0;
          System.out.println("B");
          timer = System.currentTimeMillis();
          for (int val : arr) {
            foores += val;
          }
          timeB += System.currentTimeMillis() - timer;
          System.out.println("C");
          foores = 0;
          timer = System.currentTimeMillis();
          int size = arr.length;
          for (int i = 0; i < size; i++) {
            foores += arr[i];
          }
          timeC += System.currentTimeMillis() - timer;
          // Ugly try catch
          System.out.println("D");
          foores = 0;
          timer = System.currentTimeMillis();
          try {
            int idx = 0;
            while (true) {
          foores += arr[idx++];
            }
          } catch (IndexOutOfBoundsException e) {
            // noop
          }
          timeD += System.currentTimeMillis() - timer;
        }
        System.out.println();
        System.out.println("Results:");
        System.out.println("A - arr.length:     " + timeA);
        System.out.println("B - int val: arr:   " + timeB);
        System.out.println("C - store size:     " + timeC);
        System.out.println("D - ugly try/catch: " + timeD);
      }
    }

    So this tries several different styles of iterating the loop, does it ten times (to get the hotspot compiler up - I shouldn't count the first 3 or 4 values, but I'm too lazy ...). This is the result on my machine:

    Results:
    A - arr.length:     675
    B - int val: arr:   630
    C - store size:     624
    D - ugly try/catch: 541

... where the results are milliseconds. So yes, you save about a sixth in time when doing the *cough* very common case of iterating over an extremely large array and executing a completely trivial operation for each element. I really can't think of any situation where this is justified.

Interesting btw: if you don't do the JVM warming, then arr.length and store size are approximately equal, but the foreach loop is significantly slower, though again not enough to consider it relevant in any way.

  • Anonymous Coward (unregistered) in reply to Ytram
    Ytram:

    A few test runs resulted in:
    Bad Loop: 718 milliseconds
    Good Loop: 15 milliseconds

    Bad Loop: 671 milliseconds
    Good Loop: 15 milliseconds

    Bad Loop: 671 milliseconds
    Good Loop: 31 milliseconds

    Bad Loop: 765 milliseconds
    Good Loop: 15 milliseconds

    It might have to do with the fact that your displayProductInfo sub doesn't actually do anything.

    PS:  I know my method of timing is not too precise, but with the large disparity in times, it is still useful.  Also, most of this is moot anyways, as the example is java and both of our examples were .NET.

    I bet you ran it in the debugger.  Compile as release code and try again, there's a huge speed difference in exception handling.

     

  • Anonymous (unregistered) in reply to Anonymous

    Update: if you ran the JVM with -server setting then you'd get this:

    Results:
    A - arr.length:     13
    B - int val: arr:   40
    C - store size:     44
    D - ugly try/catch: 308

so once compiled to machine code, the try/catch thingie really sucks. So the tip was maybe indeed correct in Java 1.0 times ... interesting to note btw: this is a 2 GHz box, so there seems to be some nice pipelining going on as it does about 2,000,000,000 additions per second. Or something is optimized away ...

  • (cs) in reply to Anonymous
    Anonymous:
    Just to clear this Java thing up:

    public class LoopTest {

    So this tries several different styles of iterating the loop, does it ten times (to get the hotspot compiler up - I shouldn't count the first 3 or 4 values, but I'm too lazy ...).


10 iterations are WAY too few; your code is probably never actually compiled, as only the server JVM seems to do it. Microbenchmarks like this are REALLY hard to get right on a complex system like the HotSpot JVM:


    http://www-128.ibm.com/developerworks/java/library/j-jtp02225.html?ca=drs-j0805

  • C++ (unregistered)

The real WTF is the language itself, which teaches its programmers not to worry about the "simple" things so they can get on with building the "great" things. I hope all you Java programmers sleep better tonight knowing that "someone" is out there watching over your latest "greatest" invention..

    "Whoever is careless with the truth in small matters cannot be trusted with important matters."- Albert Einstein


  • Anonymous (unregistered) in reply to brazzy

You're of course right, and fiddling around with the test a little bit more shows that it's definitely not working correctly. E.g. the numbers with -server are also wrong.

    But the point is - I think - valid, the exception try/catch thing is not giving any performance boost that is in any way significant. Actually, with compiled bytecode it seems to hurt performance, albeit also more or less insignificantly.

  • Anonymous Coward (unregistered) in reply to Anonymous Coward
    I:

    I bet you ran it in the debugger.  Compile as release code and try again, there's a huge speed difference in exception handling.

    I copied the code and tried it for myself.

    10 results of Ytram's code, run as a release build outside the debugger, show that exceptions are slower (as expected ;)), but not as badly as [s]he found:

    Bad Loop: 31 milliseconds
    Good Loop: 31 milliseconds
    Bad Loop: 15 milliseconds
    Good Loop: 15 milliseconds
    Bad Loop: 15 milliseconds
    Good Loop: 15 milliseconds
    Bad Loop: 31 milliseconds
    Good Loop: 15 milliseconds
    Bad Loop: 15 milliseconds
    Good Loop: 15 milliseconds
    Bad Loop: 31 milliseconds
    Good Loop: 15 milliseconds
    Bad Loop: 31 milliseconds
    Good Loop: 15 milliseconds
    Bad Loop: 31 milliseconds
    Good Loop: 15 milliseconds
    Bad Loop: 31 milliseconds
    Good Loop: 15 milliseconds
    Bad Loop: 31 milliseconds
    Good Loop: 15 milliseconds

This kind of exception handling remains crappy coding, of course.

  • (cs)

    Forget the benchmarking, the exception method is slower than the standard for loop IN EVERY CASE, no matter how long the array is. The HotSpot VM optimizes away the double index check, leaving you with only one check. The exception method also does one check per iteration. The only difference is that an exception is thrown and caught. Unused try-catch blocks are free, but actually throwing an exception is VERY expensive, as it dumps a complete stacktrace.

    Plus, as others have stated before, premature optimization is almost always counterproductive.

  • nick-less (unregistered)

You may not like it, but a loop like this is considerably faster than a normal loop for short loops

    Just run
            String arr[] = new String[10000000];
            long start = System.currentTimeMillis();
            try {
                int idx=0;
                while (true) {
                    arr[idx++]="";
                }
            } catch (IndexOutOfBoundsException ex) {
                
            }

            System.out.println("1st took "+ (System.currentTimeMillis()-start));
            start = System.currentTimeMillis();
            
            for (int i=0;i<arr.length;i++) {
                arr[i]="";
            }

            System.out.println("2nd took "+(System.currentTimeMillis()-start));

    I've seen books where this was recommended as "preferred way of iterating over arrays".



  • Anonymous (unregistered) in reply to nick-less

    Well, as the other poster stressed, this is nearly impossible to test reliably. Did you try running it in a server JVM? Did you warm the HotSpot compiler?

  • (cs) in reply to Anonymous
    Anonymous:
    this is a 2 GHz box, so there seems to be some nice pipelining going on as it does about 2,000,000,000 additions per seconds. Or something is optimized away ...

Most modern JVMs implement a JIT compiler, which means that loading times (VM startup & program loading) are a bit longer, but your code is basically running at native speed; there is still a bit of bloat, but on very optimisable code (such as this: light, repeated, simple operations) you may reach C-level speed.

    Another drawback of JIT compiling VMs is that they use more memory than "regular" interpreting ones.

  • (cs) in reply to masklinn

    In regards to pre-vs post increment

    in the context of the for loop or any other "just increment the damn thing" situation, the resulting assembly code is identical:
    9:        ++i;
    0040102F   mov         eax,dword ptr [ebp-4]
    00401032   add         eax,1
    00401035   mov         dword ptr [ebp-4],eax
    10:       i++;
    00401038   mov         ecx,dword ptr [ebp-4]
    0040103B   add         ecx,1
    0040103E   mov         dword ptr [ebp-4],ecx

    In the context of assignment:

    10:       j = ++i;
    00401036   mov         eax,dword ptr [ebp-4]
    00401039   add         eax,1
    0040103C   mov         dword ptr [ebp-4],eax
    0040103F   mov         ecx,dword ptr [ebp-4]
    00401042   mov         dword ptr [ebp-8],ecx
    11:       j = i++;
    00401045   mov         edx,dword ptr [ebp-4]
    00401048   mov         dword ptr [ebp-8],edx
    0040104B   mov         eax,dword ptr [ebp-4]
    0040104E   add         eax,1
    00401051   mov         dword ptr [ebp-4],eax

    4 movs, 1 add each, order is different. Probably very close in timing.

    Of course, that applies to primitive types.

Objects are different. With pre-increment, the object can simply return a reference to itself; with post-increment, the object must first copy itself before incrementing, then return that copy (which could result in 2 copy-constructor calls, but some compilers catch this situation, causing only 1 cc call).


  • nick-less (unregistered) in reply to Anonymous

    >> Well, as the other poster stressed, this is nearly impossible to test reliably. Did you try running it in a server JVM? Did you warm the HotSpot compiler?

Yes and yes, it's always a little faster if the overhead generated by the exception is smaller than the execution time of the loop. The bytecode generated is shorter, and the bounds checking is moved from the bytecode to the VM, where it happens anyway...


However, as other posters said, it doesn't make sense in 98 percent of the cases where clueless programmers use it...

  • (cs) in reply to C++
    Anonymous:
The real WTF is the language itself, which teaches its programmers not to worry about the "simple" things so they can get on with building the "great" things. I hope all you Java programmers sleep better tonight knowing that "someone" is out there watching over your latest "greatest" invention..

    "Whoever is careless with the truth in small matters cannot be trusted with important matters."- Albert Einstein


    It is exactly that kind of thinking that leads to this kind of crappy code, in C as much as in Java. Someone thought they needed to take care of something as small as the performance impact of array bounds checking and wrote code that is BOTH slower and less maintainable.

    I sleep quite well knowing that real Java programmers write fast and clear code while paying attention to the big picture, while those who are C/C++ programmers at heart get stuck trying to micromanage stuff.
  • (cs) in reply to nick-less
    Anonymous:
You may not like it, but a loop like this is considerably faster than a normal loop for short loops

    Just run
            String arr[] = new String[10000000];
            long start = System.currentTimeMillis();
            try {
                int idx=0;
                while (true) {
                    arr[idx++]="";
                }
            } catch (IndexOutOfBoundsException ex) {
                
            }

            System.out.println("1st took "+ (System.currentTimeMillis()-start));
            start = System.currentTimeMillis();
            
            for (int i=0;i<arr.length;i++) {
                arr[i]="";
            }

            System.out.println("2nd took "+(System.currentTimeMillis()-start));

    I've seen books where this was recommended as "preferred way of iterating over arrays".


    Wow, you've got a lot of nerve to defend this kind of crap. Please tell me which books those were so that I can warn people about them. As for the code, it's a fine example of a flawed microbenchmark as described in the link I cited. Even if it were not flawed, it would only confirm that the difference in performance is so small that it's meaningless compared to the loss of code clarity.
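    For what it's worth, a somewhat less-flawed version of that benchmark would at least warm up the JIT before measuring and time with System.nanoTime(). Here is a minimal sketch (the class and method names are hypothetical, and for any real measurement a proper harness such as JMH is the right tool):

    ```java
    // Hypothetical sketch: warm up the JIT, then time both loop styles.
    // Hand-rolled timing like this is still only indicative.
    class LoopBench {

        // The exception-terminated loop from the comment above.
        static long exceptionLoop(String[] arr) {
            long start = System.nanoTime();
            try {
                int idx = 0;
                while (true) {
                    arr[idx++] = "";
                }
            } catch (IndexOutOfBoundsException ex) {
                // Expected: the exception signals the end of the array.
            }
            return System.nanoTime() - start;
        }

        // The idiomatic bounds-checked loop.
        static long normalLoop(String[] arr) {
            long start = System.nanoTime();
            for (int i = 0; i < arr.length; i++) {
                arr[i] = "";
            }
            return System.nanoTime() - start;
        }

        public static void main(String[] args) {
            String[] arr = new String[10_000_000];
            // Warm-up runs so the JIT compiles both methods before measuring.
            for (int i = 0; i < 5; i++) {
                exceptionLoop(arr);
                normalLoop(arr);
            }
            System.out.println("exception loop: " + exceptionLoop(arr) + " ns");
            System.out.println("normal loop:    " + normalLoop(arr) + " ns");
        }
    }
    ```

    Even with warm-up, results vary across VMs and array sizes; the point stands that any difference is too small to justify the loss of clarity.
    
    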

  • (cs) in reply to Nick Coghlan
    Anonymous:
     
    You know, I think Python does something like this internally when you use an iterator object - it signals the end-of-list by throwing an exception.

    Indeed it does. However, that's a natural consequence of the fact that you can make your own iterators for use in for loops by writing a class with __iter__() and next() methods. The next() method has to be able to return any value as the next item in the for loop, but it also needs to be able to tell the for-loop machinery that there are no more values so the loop should terminate. The obvious answer to that problem is to raise an exception (StopIteration, to be precise) to say "I'm done".

    Cheers,
    Nick.

    Yes, but it's still using an exception to signal something non-exceptional. Personally, I've got nothing against this - it works, and it's relatively elegant. I'm just surprised that the "exceptions are for exceptional circumstances" people haven't complained about it yet.
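    For contrast, Java's own iterator protocol makes end-of-iteration an ordinary boolean test rather than an exception. A minimal sketch (the Range class here is hypothetical) of the hasNext()/next() split, where NoSuchElementException is reserved for callers that ignore hasNext():

    ```java
    import java.util.Iterator;
    import java.util.NoSuchElementException;

    // Hypothetical example: unlike Python's StopIteration, which ends every
    // loop, Java separates "are there more elements?" (hasNext) from
    // "give me the next one" (next), so well-behaved loops never see an
    // exception at the end of iteration.
    class Range implements Iterator<Integer> {
        private int current;
        private final int end;

        Range(int start, int end) {
            this.current = start;
            this.end = end;
        }

        @Override
        public boolean hasNext() {
            return current < end;
        }

        @Override
        public Integer next() {
            if (!hasNext()) {
                // Only reached by misuse, i.e. calling next() past the end.
                throw new NoSuchElementException();
            }
            return current++;
        }
    }
    ```

    A loop such as `while (r.hasNext()) { use(r.next()); }` therefore terminates on a boolean, which is why the "exceptions are for exceptional circumstances" crowd tends to be happier with Java's design than with Python's.
    
    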

  • (cs) in reply to brazzy
    brazzy:

    I sleep quite well knowing that real Java programmers write fast and clear code while paying attention to the big picture, while those who are C/C++ programmers at heart get stuck trying to micromanage stuff.


    There has to be a logical flaw there: you put "fast" and "Java" in the same sentence. C/C++ done right requires little "micromanagement."
  • zootm (unregistered) in reply to Mike R
    Mike R:
    brazzy:

    I sleep quite well knowing that real Java programmers write fast and clear code while paying attention to the big picture, while those who are C/C++ programmers at heart get stuck trying to micromanage stuff.


    There has to be a logical flaw there, you put fast and java in the same sentence. C/C++ done right requires little "micromanagement."
    It does, however, encourage micromanagement, mainly because most (all?) valid uses of C(++) these days are for systems where optimisation beyond that which is provided by more modern languages is required.
  • zootm (unregistered) in reply to zootm

    What the hell makes the code do that?!

  • (cs) in reply to dubwai
    dubwai:

    Anonymous:
    In 99% of cases where this type of code appears, performance is irrelevant, because for 99% of code, this kind of micro-optimisation is irrelevant. As someone has pointed out, a method named "displayProductInfo" is likely to be doing something slower than throwing an exception. You could just as well optimise for code size.

    I don't think anyone is arguing that the reason not to do this is performance. There were people saying to do it for performance reasons; people are explaining that this is not true.



    Hey dubwai, is that Bootsy Collins?
  • (cs) in reply to C++

    Anonymous:
    The real WTF is the language itself, which teaches its programmers not to worry about the "simple" things so they can get on to building the "great" things.

    "Whoever is careless with the truth in small matters cannot be trusted with important matters."- Albert Einstein

    <FONT face="Courier New" size=2>isn't the entire heart of programming abstraction?  i think high-level languages are pretty cool since they eliminate the tedium associated with many trivial programming tasks.  unfortunately, i haven't yet had the benefit of encountering such a language in this here "real-world" situation.</FONT>

  • nick-less (unregistered) in reply to brazzy

    >> Wow, you've got a lot of nerve to defend this kind of crap.

    I'm not trying to defend this crap, I'm trying to explain where this kind of code originates from... ;-)

Leave a comment on “Array of Hope”
