• (cs) in reply to emptyset
    emptyset:

    i have seen this happen.  let's call it a consequence of our times.  as someone quite dear to me once said: "average intelligence, while good, still implies that half of those surveyed are below its threshold."

    Depending on your relationship with said person, you might want to explain the difference between mean (average) and median.

  • (cs) in reply to Kodi
    Kodi:

    It's a good idea, but it's a new idea;

    therefore, I fear it and must reject it..

    Since when did the definition of 'good' become "ridiculously asinine"?

  • (cs) in reply to Anonymous

    Anonymous:
    Assuming it is idx++..
    exceptions are faster than conditionals!!!1oneone

    No, they are not.  At least not in Java.  They are thousands of times slower, which still isn't much because conditionals are extremely fast.

    Where do people get these ideas?
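    For the curious, here's a minimal Java sketch of the two loop styles under discussion (class and method names are my own invention, not from the original code). Both visit the same elements; the difference is that one terminates via an explicit conditional and the other by letting the JVM's implicit bounds check throw:

```java
// Sketch: the "WTF" loop relies on the JVM's implicit bounds check to
// terminate, while the normal loop uses an explicit conditional.
public class LoopStyles {
    static int sumWithConditional(int[] a) {
        int sum = 0;
        for (int i = 0; i < a.length; i++) {
            sum += a[i];
        }
        return sum;
    }

    static int sumWithException(int[] a) {
        int sum = 0;
        int i = 0;
        try {
            while (true) {
                sum += a[i];
                i++;
            }
        } catch (ArrayIndexOutOfBoundsException e) {
            // The throw/catch runs exactly once, but constructing the
            // exception (stack trace included) costs far more than any
            // single integer comparison.
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4};
        System.out.println(sumWithConditional(data)); // 10
        System.out.println(sumWithException(data));   // 10
    }
}
```

    Same result either way, which is exactly why the exception version buys nothing but obscurity.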

  • LotJ (unregistered) in reply to heinzkunz

    CornedBee:
    That doesn't apply to Java, though (if this is, as I strongly suspect, Java), because you can't apply ++ to anything complex there.

    No, as with many things, this does not apply to Java.  However, given the amount of stupid stuff I've seen Java programmers do in C++, simply knowing that not every language is as braindead as Java is a step in the right direction.

    Anonymous:
    By the way: "Premature optimization is the root of all evil (or at least most of it) in programming." — Donald Knuth

    Premature optimization is different from good coding habits.

  • NoName (unregistered)

    It was in some early, fscking Java tutorial once. You know, the kind of thing where some clueless idiot thinks he's made the greatest invention since sliced bread and therefore feels the need to pass such nonsense off as "good practice".

  • (cs) in reply to kipthegreat
    kipthegreat:
    dammit.. the second statement is supposed to be:

    for (int i = 0; i < x; ++i) {   statements;   }

    You mixed the two statements. ++i is trivially faster than i++ because i++ has to preserve the value from before the increment while ++i only yields the value after it. Then again, even JavaScript doesn't show any significant difference between the two styles over 10,000,000 cycles on the 3 main browsers (Opera, Firefox and MSIE).

    Cycling to 0 (for(i=max-1; i>=0; --i) instead of for(i=0; i<max; ++i)), on the other hand, yields surprisingly impressive results under Firefox (a 50-70% increase in looping speed with Firefox 1.0.6) and doesn't do anything significant on Opera or MSIE (which shows how unoptimized Firefox's JavaScript loops are).

  • Rain dog (unregistered) in reply to Travis
    Anonymous:

    Anonymous:
    It's faster because the array library that throws the exception must already be checking whether the index is in range or not. Adding another check in application code is just duplicating work. (NB Not that I think programming this way is a good idea)

    God you people crack me up...I love when people give excuses like "it is faster" for examples like this.  If this is how someone "optimizes" code then I feel sorry for their employer and coworkers.  An extra if statement checking whether the index is past the range of elements is not going to make any noticeable performance difference.

    That is almost as bad as the people who say that pre increment in a for loop is faster than post increment...ridiculous



    Preincrement used to be faster on certain C/C++ compilers because of various optimizations the compiler did or did not make.

    In C/C++ today, however, compilers are far more advanced at optimization than they were even 10 years ago.
  • NoName (unregistered) in reply to Graham P
    Anonymous:
    It's faster because the array library that throws the exception must already be checking whether the index is in range or not. Adding another check in application code is just duplicating work. (NB Not that I think programming this way is a good idea)


    No, it is not faster. It prevents the HotSpot compiler from removing the array bounds check.
  • Rain dog (unregistered) in reply to Anonymous
    Anonymous:
    Anonymous:
    That is almost as bad as the people who say that pre increment in a for loop is faster than post increment...ridiculous


    Uh, it is faster.  Trivially so, but when there is no downside to using pre-increment (other than learning a new style) and a minor benefit, why would you ever not?


    Except in the case of overloaded operators, the preincrement operator and post increment operators are the same now.
  • (cs) in reply to masklinn
    masklinn:
    kipthegreat:
    dammit.. the second statement is supposed to be:

    for (int i = 0; i < x; ++i) {   statements;   }

    You mixed the two statements. ++i is trivially faster than i++ because i++ has to preserve the value from before the increment while ++i only yields the value after it. Then again, even JavaScript doesn't show any significant difference between the two styles over 10,000,000 cycles on the 3 main browsers (Opera, Firefox and MSIE).

    Cycling to 0 (for(i=max-1; i\>=0; --i) instead of for(i=0; i<MAX; p are)< loops javascript Firefox? unoptimized how (shows MSIE or Opera on significant anything do doesn?t 1.0.6), firefox with speed looping in increase (50-70% Firefox under results impressive surprisingly yields hand, other the ++i),>

    Let's try that again

  • Travis (unregistered) in reply to Anonymous

    Anonymous:
    Anonymous:
    That is almost as bad as the people who say that pre increment in a for loop is faster than post increment...ridiculous


    Uh, it is faster.  Trivially so, but when there is no downside to using pre-increment (other than learning a new style) and a minor benefit, why would you ever not?

    Except that any decent compiler would probably take care of this detail for you...if pre versus post increment does not change the for loop whatsoever...and it provides some sort of speed benefit, then why wouldn't a compiler writer do it? 

  • (cs) in reply to masklinn
    masklinn:
    masklinn:
    kipthegreat:
    dammit.. the second statement is supposed to be:

    for (int i = 0; i < x; ++i) {   statements;   }

    You mixed the two statements. ++i is trivially faster than i++ because i++ has to preserve the value from before the increment while ++i only yields the value after it. Then again, even JavaScript doesn't show any significant difference between the two styles over 10,000,000 cycles on the 3 main browsers (Opera, Firefox and MSIE).

    Cycling to 0 (for(i=max-1; i>=0; --i) instead of for(i=0; i<max; ++i)), on the other hand, yields surprisingly impressive results under Firefox (a 50-70% increase in looping speed with Firefox 1.0.6) and doesn't do anything significant on Opera or MSIE (which shows how unoptimized Firefox's JavaScript loops are).

    Let's try that again

    Third time's the charm?

  • (cs) in reply to AndrewVos
    AndrewVos:

    Correct me if I'm wrong... just had to test it. Not that I would ever use it!

    Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
            Me.CreateArray()
            Me.WTF()
            Me.NonWTF()
        End Sub
        Private Sub CreateArray()
            Dim x As Integer
            For x = 0 To Me.Array.Length - 1
                Me.Array(x) = "What the Fuk"
            Next x
        End Sub
        Private Array(1000000) As String
        Private Sub WTF()
            Dim currentTickCount As Long = Now.Ticks
            Try
                Dim idx As Integer
                Do While (True)
                    Me.displayProductInfo(Me.Array(idx))
                    idx += 1
                Loop
            Catch ex As IndexOutOfRangeException
            End Try
            Dim timeTaken As Long = Now.Ticks - currentTickCount
            Dim timeTakenInSeconds As Double = timeTaken / TimeSpan.TicksPerSecond
            Trace.WriteLine("WTF TOOK : " & timeTaken & " TICKS (" & timeTakenInSeconds & ")")
        End Sub
        Private Sub NonWTF()
            Dim currentTickCount As Long = Now.Ticks
            Dim x As Integer
            For x = 0 To Me.Array.Length - 1
                Me.displayProductInfo(Me.Array(x))
            Next x
            Dim timeTaken As Long = Now.Ticks - currentTickCount
            Dim timeTakenInSeconds As Double = timeTaken / TimeSpan.TicksPerSecond
            Trace.WriteLine("NON-WTF TOOK : " & timeTaken & " TICKS (" & timeTakenInSeconds & ")")
        End Sub
        Private Sub displayProductInfo(ByVal text As String)
        End Sub


    WTF TOOK : 468750 TICKS (0.046875)
    NON-WTF TOOK : 312500 TICKS (0.03125)
    <snip>
    WTF TOOK : 312500 TICKS (0.03125)
    NON-WTF TOOK : 312500 TICKS (0.03125)


    I got completely different results using this code:

    static void Main(string[] args)
            {
                int [] test = new int[1000000];
                Random r = new Random();
                for(int i = 0; i < test.Length; i++)
                    test[i] = r.Next();
                BadLoop(test);
                GoodLoop(test);
                Console.WriteLine("Press enter to continue...");
                Console.Read();
            }

            static void BadLoop(int [] test)
            {
                DateTime before = DateTime.Now;
                try
                {
                    int idx = 0;
                    while(true)
                    {
                        DoSomething(test[idx]);
                        idx++;
                    }
                }
                catch(IndexOutOfRangeException) {}

                TimeSpan span = DateTime.Now - before;
                Console.WriteLine("Bad Loop: " + span.Milliseconds.ToString() + " milliseconds");
            }

            static void GoodLoop(int [] test)
            {
                DateTime before = DateTime.Now;
                for(int i = 0; i < test.Length; i++)
                {
                    DoSomething(test[i]);
                }
                TimeSpan span = DateTime.Now - before;
                Console.WriteLine("Good Loop: " + span.Milliseconds.ToString() + " milliseconds");
            }

            static void DoSomething(int val)
            {
                val++;
                val = val%(val/2);
            }

    A few test runs resulted in:
    Bad Loop: 718 milliseconds
    Good Loop: 15 milliseconds

    Bad Loop: 671 milliseconds
    Good Loop: 15 milliseconds

    Bad Loop: 671 milliseconds
    Good Loop: 31 milliseconds

    Bad Loop: 765 milliseconds
    Good Loop: 15 milliseconds

    It might have to do with the fact that your displayProductInfo sub doesn't actually do anything.

    PS:  I know my method of timing is not too precise, but with the large disparity in times it is still useful.  Also, most of this is moot anyway, as the example is Java and both of our examples were .NET.

  • (cs) in reply to kipthegreat
    kipthegreat:
    A Wizard A True Star:

    Everyone's assuming there should be an idx++ in there... but you know, it could be even more frightening than that. It's entirely possible that displayProductInfo(prodnums[idx]) will also delete the item from the prodnums array.



    Assuming this is Java (which it looks like to me), it is impossible for that to happen.  displayProductInfo will be given a copy of the pointer to prodnums[idx], not the same pointer that the loop uses.  The method would have no knowledge of the array its parameter comes from.  Also, arrays are not dynamic so you can't really remove anything from any array (other than setting it to null.. which, again, displayProductInfo is unable to do).

    This isn't quite right.  The thing about the reference (pointer) being a copy is correct, but there's only one array being pointed to.  It doesn't copy the array, just the pointer.  This method could change array elements.  It can't resize the array, though because the length is not mutable, that part is correct.

  • LotJ (unregistered) in reply to Travis

    Anonymous:
    Except that any decent compiler would probably take care of this detail for you...if pre versus post increment does not change the for loop whatsoever...and it provides some sort of speed benefit, then why wouldn't a compiler writer do it? 

    Not so with complex types.  Since pre and post inc can have drastically different overloaded operations, the compiler cannot call one instead of the other.

  • (cs) in reply to Rain dog
    Anonymous:
    Anonymous:
    Anonymous:
    That is almost as bad as the people who say that pre increment in a for loop is faster than post increment...ridiculous


    Uh, it is faster.  Trivially so, but when there is no downside to using pre-increment (other than learning a new style) and a minor benefit, why would you ever not?


    Except in the case of overloaded operators, the preincrement operator and post increment operators are the same now.



    Please tell me you're joking.

    int[] arr = {0, 1, 2};
    int idx = 1;
    cout << arr[idx++] << endl;   //  prints 1
    idx = 1;
    cout << arr[++idx] << endl;   //  prints 2

  • Travis (unregistered) in reply to LotJ
    Anonymous:

    Anonymous:
    Except that any decent compiler would probably take care of this detail for you...if pre versus post increment does not change the for loop whatsoever...and it provides some sort of speed benefit, then why wouldn't a compiler writer do it? 

    Not so with complex types.  Since pre and post inc can have drastically different overloaded operations, the compiler cannot call one instead of the other.

    Then changing from pre increment to post increment to gain a nanosecond speed benefit would break your code regardless....so the point is moot. Game. Set. Match.

  • NoName (unregistered) in reply to Rain dog
    Anonymous:
    Anonymous:
    Anonymous:
    That is almost as bad as the people who say that pre increment in a for loop is faster than post increment...ridiculous


    Uh, it is faster.  Trivially so, but when there is no downside to using pre-increment (other than learning a new style) and a minor benefit, why would you ever not?


    Except in the case of overloaded operators, the preincrement operator and post increment operators are the same now.


    It is all just the accumulation of urban myth, programmer voodoo, wishful thinking, and wrongful generalisation of special cases.

    Yes, there was a CPU/compiler combination where running a for() loop backwards was faster than running it forwards. But that doesn't mean that every for() loop should be run backwards.

    Yes, there was a CPU/compiler combination where using a float as the counter in a loop was faster than using an int. But that doesn't mean ...

    Yes, in some languages and circumstances a pre- and post-increment behave differently. But that doesn't mean ...

    Yes, there might be a language/compiler combination where an explicit array range checking might be slower than the compiler-inserted array bounds checking. But that doesn't mean ...
  • Anthony (unregistered) in reply to LotJ

    This assumes, of course, that the optimizer is stupid. The optimizer is perfectly free to simply translate both statements as 'i = i+1', since the return value is being discarded. If the return value is not being discarded (x = i++ vs x = ++i), the two code segments don't have the same effect so you can't make this substitution anyway, and in any case the actual code generated will be:
    x = i++  ==> x = i; i = i+1
    x = ++i  ==> i = i+1; x = i

    Naturally, if the compiler doesn't know how the increment operator works, it cannot do this optimization, but that's the fault of how the increment operator was written, not a specific issue of ++i vs i++.
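    Those two expansions are easy to see in plain Java (variable names are arbitrary):

```java
// x = i++ expands to: x = i; i = i + 1
// x = ++i expands to: i = i + 1; x = i
public class IncrementDemo {
    public static void main(String[] args) {
        int i = 5;
        int x = i++; // x gets the old value
        System.out.println(x + " " + i); // 5 6

        i = 5;
        int y = ++i; // y gets the new value
        System.out.println(y + " " + i); // 6 6
    }
}
```

    When the result is discarded, as in a for-loop header, both expansions collapse to the same `i = i + 1`, which is the point being made above.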

  • (cs) in reply to Travis
    Anonymous:

    Anonymous:
    It's faster because the array library that throws the exception must already be checking whether the index is in range or not. Adding another check in application code is just duplicating work. (NB Not that I think programming this way is a good idea)

    God you people crack me up...I love when people give excuses like "it is faster" for examples like this.  If this is how someone "optimizes" code then I feel sorry for their employer and coworkers.  An extra if statement checking whether the index is past the range of elements is not going to make any noticeable performance difference.

    That is almost as bad as the people who say that pre increment in a for loop is faster than post increment...ridiculous



    Oh, pre increment is faster with integers in Java; here are my results over 1 billion iterations (time is in millis)...
    Timings for 1000000000 iterations:
    Post increment 3469
     Pre increment 3469

    Post increment 3484
     Pre increment 3453

    Post increment 3469
     Pre increment 3453

    Post increment 3453
     Pre increment 3453

    Post increment 3485
     Pre increment 3453

    Post increment 3453
     Pre increment 3562

    Post increment 3485
     Pre increment 3437

    Post increment 3485
     Pre increment 3468

    Post increment 3485
     Pre increment 3453

    Post increment 3469
     Pre increment 3453

    Post increment average 3473
     Pre increment average 3465

    See, 8 thousandths of a second over a billion iterations; it's faster, just not by much.  If this is really where people try to optimize, I wonder what happens when they have a real bottleneck.
  • (cs)

    My guess: the programmer tried

      for (idx=0; idx<prodnums.length(); idx++)
      {
        displayProductInfo(prodnums[idx]);
      }
    

    but that didn't compile (because the correct way is

    for (idx=0; idx<prodnums.length; idx++)

    without parentheses, since length is a field, not a method), so he had to choose another way.

    When he found out, it was too late... because it was in production and no-one was sure whether or not displayProductInfo() might throw an IndexOutOfBoundsException too, so removing the try-catch block was not an option.

    Disclaimer: No, it wasn't me! Really! When I was in a similar situation, relatively new to Java and finding out that length() does not work for arrays, I used java.lang.reflect.Array.getLength() instead until I found out what went wrong.
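    For anyone else bitten by this, here's a small sketch of the distinction (the reflective fallback lives in java.lang.reflect.Array):

```java
import java.lang.reflect.Array;

public class LengthDemo {
    public static void main(String[] args) {
        int[] prodnums = {10, 20, 30};

        // Arrays expose length as a final field, not a method:
        System.out.println(prodnums.length);           // 3

        // Strings (and collections, via size()) use a method instead:
        System.out.println("abc".length());            // 3

        // The reflective route also works, if verbosely:
        System.out.println(Array.getLength(prodnums)); // 3
    }
}
```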

  • (cs) in reply to NoName
    Anonymous:
    Yes, there was a CPU/compiler combination where running a for() loop backwards was faster than running it forwards.

    More than once, and as some Javascript checks would show you, it's still true in some situations.

    But that doesn't mean that every for() loop should be run backwards.

    Nope, only every for loop when you're using a badly optimizing/optimized interpreter/compiler.

  • (cs) in reply to dubwai
    dubwai:
    kipthegreat:
    A Wizard A True Star:

    Everyone's assuming there should be an idx++ in there... but you know, it could be even more frightening than that. It's entirely possible that displayProductInfo(prodnums[idx]) will also delete the item from the prodnums array.



    Assuming this is Java (which it looks like to me), it is impossible for that to happen.  displayProductInfo will be given a copy of the pointer to prodnums[idx], not the same pointer that the loop uses.  The method would have no knowledge of the array its parameter comes from.  Also, arrays are not dynamic so you can't really remove anything from any array (other than setting it to null.. which, again, displayProductInfo is unable to do).

    This isn't quite right.  The thing about the reference (pointer) being a copy is correct, but there's only one array being pointed to.  It doesn't copy the array, just the pointer.  This method could change array elements.  It can't resize the array, though because the length is not mutable, that part is correct.



    Well the method could change properties of the array elements, but the array will still reference the same object no matter what (since the method is only passed one object, not the whole array).  Which I think is what both of us meant to say.  But if the method were defined like this, for example, it would have no effect on the array:

    public void displayProductInfo(Object o) {
      o=null;
    }


  • (cs) in reply to ammoQ
    ammoQ:
    My guess: the programmer tried
      for (idx=0; idx<prodnums.length (="" idx="" {="" displayproductinfo(prodnums[idx="" }=""></prodnums.length>


    I'll try again:

    My guess: the programmer tried

      for (idx=0; idx<prodnums.length(); idx++) {
        displayProductInfo(prodnums[idx]);
      }
    
  • (cs) in reply to kipthegreat
    kipthegreat:
    dubwai:
    kipthegreat:
    A Wizard A True Star:

    Everyone's assuming there should be an idx++ in there... but you know, it could be even more frightening than that. It's entirely possible that displayProductInfo(prodnums[idx]) will also delete the item from the prodnums array.



    Assuming this is Java (which it looks like to me), it is impossible for that to happen.  displayProductInfo will be given a copy of the pointer to prodnums[idx], not the same pointer that the loop uses.  The method would have no knowledge of the array its parameter comes from.  Also, arrays are not dynamic so you can't really remove anything from any array (other than setting it to null.. which, again, displayProductInfo is unable to do).

    This isn't quite right.  The thing about the reference (pointer) being a copy is correct, but there's only one array being pointed to.  It doesn't copy the array, just the pointer.  This method could change array elements.  It can't resize the array, though because the length is not mutable, that part is correct.



    Well the method could change properties of the array elements, but the array will still reference the same object no matter what (since the method is only passed one object, not the whole array).  Which I think is what both of us meant to say.  But if the method were defined like this, for example, it would have no effect on the array:

    public void displayProductInfo(Object o) {
      o=null;
    }

    Sorry, I just realized what you meant.  I misread-into what you wrote.

  • (cs) in reply to bugsRus

    bugsRus:

    See, 8 thousandths of a second over 10 billion iterations, it's faster, just not by much.  If this is really where people try to optimize, I wonder what happens when they have a real bottleneck.

    What if you need to increment 1 billion-trillion-googol times?  Huh?  What about that!

  • (cs)

    No one has yet appreciated the true subtlety of this code. For you see, it's possible that displayProductInfo is the one that is throwing the IndexOutOfBoundException!

  • (cs) in reply to Travis
    Anonymous:

    Anonymous:
    It's faster because the array library that throws the exception must already be checking whether the index is in range or not. Adding another check in application code is just duplicating work. (NB Not that I think programming this way is a good idea)

    God you people crack me up...I love when people give excuses like "it is faster" for examples like this.  If this is how someone "optimizes" code then I feel sorry for their employer and coworkers.  An extra if statement checking whether the index is past the range of elements is not going to make any noticeable performance difference.

    That is almost as bad as the people who say that pre increment in a for loop is faster than post increment...ridiculous

     

    I don't know too much about Java and how exceptions are handled, or how referencing x[10] when len(x) = 9 works either, but hasn't anyone here heard of

    "It's easier to ask forgiveness than permission" ?

    While it may be language dependent, replacing multiple if statements with a single try-except clause can be more efficient both in terms of writing the damned stuff and in execution. Or am I missing another WTF? If I could replace 20 conditionals with one except, would you mock me for that?

     

    (That said, I would've done this loop like - )

    int idx;
    while ( idx <= len(prodNums) ) 
    #Or however else you get the length of an array in Java
    {
    displayProd(prodNums[idx]);
    idx++;
    }
  • (cs) in reply to Graham P

    I'm not sure I understand what many of you are saying, because I don't know Java. But if we replace that while(true) with a for idx := 1 to len(array), neither the program nor we are checking bounds. So, I don't grasp that thread about the slowness of bound checking vs. exceptions. We need no checking at all. Where am I wrong?

  • Scott Vachalek (unregistered)

    This was a commonly recommended performance "enhancement" at one time in the early days of Java.  The exception handling time is constant while the check cost varies with the size of the array, so there is an array size above which it's faster to catch the exception and below which it's faster to check the counter.  Unfortunately there are approximately 0.001 Java applications in the universe where this kind of optimization will actually produce a noticeable result.  See "premature optimization".

    Reminds me of a company I worked at once, where they were writing a custom app server.  Step 1: rewrite StringBuffer without  the synchronized keyword.  Step 20: start thinking about architecture.

  • (cs) in reply to Cyresse

    I should point out I work in Python where a try-except clause is free.

  • Ed (unregistered) in reply to Baf
    Anonymous:


    Hm.  That actually almost makes sense.  Since Java automatically bounds-checks its arrays, checking to see if the array index is in bounds yourself is redundant.  So it all comes down to whether the mechanism for throwing and catching an exception is faster than all those extra comparisons you'd be doing otherwise.

    The thing that makes me skeptical is that exceptions in early Java were really slow.  Maybe they've improved over the years; I haven't been keeping track.  But in the old days, at least, every exception was a new object that the system had to allocate and construct.



    This was actually recommended by an early Java programming book as a performance optimization, because of the redundant check. But exception handling was, and continues to be, slow (especially compared to an integer comparison), so it was quickly dropped as a best practice.
  • (cs) in reply to Raymond Chen
    Raymond Chen:
    No one has yet appreciated the true subtlety of this code. For you see, it's possible that displayProductInfo is the one that is throwing the IndexOutOfBoundException!


    If this is true (and assuming prodnums[idx] is a product id), then it's still a WTF, since the method would be throwing an exception about which the caller has absolutely no knowledge.  That is to say, the caller is wondering "I passed you a product id!  WhereTF is the array I overstepped?!"

    If prodnums[idx] is not a product id but, instead, some concrete type, then prodnums is a misnomer and it's still a WTF.  I'd say the author would have a hard time wiggling out of this one.
  • (cs) in reply to Cyresse
    Cyresse:
    (That said, I would've done this loop like - )
    int idx;
    while ( idx <= len(prodNums) ) 
    #Or however else you get the length of an array in Java
    {
    displayProd(prodNums[idx]);
    idx++;
    }

     

    I of course mean -

    while ( idx < len(prodNums) ) #or however the hell you get the length of an array in Java
  • (cs) in reply to Scott Vachalek
    Anonymous:

    Reminds me of a company I worked at once, where they were writing a custom app server.  Step 1: rewrite StringBuffer without  the synchronized keyword.  Step 20: start thinking about architecture.


    In Java 5.0, Sun went and did this step for you.  They created StringBuilder that meets that exact requirement (unsynchronized version of StringBuffer).

    Funny thing is, any time you have code that looks like this:
    String name = lastname + ", " + firstname;

    javac actually generates this:
    String name = (new StringBuffer(lastname)).append(", ").append(firstname).toString();

    (replace "StringBuffer" with "StringBuilder" on Java 5.0+)

    So.... your company was still using StringBuffer all over the place.
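    A quick sketch of the equivalence (names invented for illustration; this is roughly, not byte-for-byte, what javac emits):

```java
public class ConcatDemo {
    public static void main(String[] args) {
        String lastname = "Doe", firstname = "John";

        // What you write:
        String name1 = lastname + ", " + firstname;

        // Roughly what javac generates behind the scenes
        // (StringBuilder on Java 5.0+, StringBuffer before that):
        String name2 = new StringBuilder(lastname)
                .append(", ")
                .append(firstname)
                .toString();

        System.out.println(name1.equals(name2)); // true
    }
}
```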
  • (cs) in reply to kipthegreat
    kipthegreat:
    Anonymous:

    Reminds me of a company I worked at once, where they were writing a custom app server.  Step 1: rewrite StringBuffer without  the synchronized keyword.  Step 20: start thinking about architecture.


    In Java 5.0, Sun went and did this step for you.  They created StringBuilder that meets that exact requirement (unsynchronized version of StringBuffer).

    Funny thing is, any time you have code that looks like this:
    String name = lastname + ", " + firstname;

    javac actually generates this:
    String name = (new StringBuffer(lastname)).append(", ").append(firstname).toString();

    (replace "StringBuffer" with "StringBuilder" on Java 5.0+)

    So.... your company was still using StringBuffer all over the place.


    The next thing they need to do is define a "property" keyword, that tells the compiler to generate accessors and mutators for you.  (I am aware that various IDEs can generate them for you).
  • (cs) in reply to kipthegreat

    kipthegreat:
    Anonymous:

    Reminds me of a company I worked at once, where they were writing a custom app server.  Step 1: rewrite StringBuffer without  the synchronized keyword.  Step 20: start thinking about architecture.


    In Java 5.0, Sun went and did this step for you.  They created StringBuilder that meets that exact requirement (unsynchronized version of StringBuffer).

    Funny thing is, any time you have code that looks like this:
    String name = lastname + ", " + firstname;

    javac actually generates this:
    String name = (new StringBuffer(lastname)).append(", ").append(firstname).toString();

    (replace "StringBuffer" with "StringBuilder" on Java 5.0+)

    So.... your company was still using StringBuffer all over the place.

    They probably have a policy that disallows the use of the + operator for String.

  • (cs) in reply to Mung Kee

    Mung Kee:

    The next thing they need to do is define a "property" keyword, that tells the compiler to generate accessors and mutators for you.  (I am aware that various IDEs can generate them for you).

    I hope you're not serious.

  • Karl (unregistered) in reply to Danny
    Anonymous:
    God you people crack me up...I love when people give excuses like "it is faster" for examples like this.

    How large was your loop? The try-catch overhead is large, but happens once. The redundant check overhead happens once per array item.

    For sufficiently large loops, it's provable that this "WTF" code is faster. That break-even point may be larger than Java's maximum array size (Integer.MAX_VALUE), however. This also assumes that the JVM doesn't perform complex analysis to remove the redundant array bounds checks.

  • (cs) in reply to Cyresse
    Cyresse:
    I should point out I work in Python where a try-except clause is free.

    Uh, no it isn't; the try clause is almost free, but the except clause sure isn't. It's pythonic to ask forgiveness rather than permission only when you'll end up asking forgiveness less often than you would have asked permission.

    dubwai:

    Mung Kee:

    The next thing they need to do is define a "property" keyword, that tells the compiler to generate accessors and mutators for you.  (I am aware that various IDEs can generate them for you).

    I hope you're not serious.

    Shouldn't he be? Python has that kind of structure, and it's much easier to work with than Java's endless, senseless and useless pages of getters and setters. Example:

    I define my class Foo with a "bar" attribute. We are all consenting adults, so I just leave my attribute exposed:

    class Foo:
        def __init__(self):
            self.bar = ""
    

    Now, by instantiating the Foo class you can freely manipulate the "bar" attribute of my objects.

    Oh, but as the program grows more complex, I realise that my implementation would be much better if I stored a different kind of value in bar. Yet I have to leave the interface as it is, since other modules rely on it. That's where "property" comes into play:

    class Foo(object):  # new-style class: required for property in Python 2
        def __init__(self):
            self.__bar = 0
            # Here I redefine my old "bar"
            # as "__bar", which is what Python
            # defines as "more or less private";
            # notice that it's an int now, too
        def setBar(self, v):
            # Let's do magical mumbo jumbo
            self.__bar = int(v)
        def getBar(self):
            return str(self.__bar)
        # and here is the magic part
        bar = property(getBar, setBar)

    Bang, I just defined a getter and a setter for my attribute without changing the interface (instantiating a "foo" object from the "Foo" class and accessing foo.bar will transparently call getBar or setBar, and could even call delBar if I had defined it).

    No external interface has changed yet the whole inner logic of the class could've been modified, and I went from public member to private member with getters and setters. All of this perfectly transparently.

  • Arachnid (unregistered) in reply to NoName
    Anonymous:
    Yes, there was a CPU/compiler combination where running a for() loop backwards was faster than running it forwards. But that doesn't mean that every for() loop should be run backwards.

    Actually, that's universally true. When your loop counts down to 0, all that's required is checking the zero flag, which the decrement itself already sets. When your loop ends elsewhere, you have to compare the counter against the end value on every iteration and check whether the result is zero or positive.
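    In source terms, the two directions look like this in Java (a toy sketch, mine; with a modern JIT the measured difference is usually noise, and compilers will often reverse a loop themselves when the iteration order doesn't matter):

```java
public class CountDown {
    static long sumForward(int[] a) {
        long sum = 0;
        for (int i = 0; i < a.length; i++) { // compare against a.length each pass
            sum += a[i];
        }
        return sum;
    }

    static long sumBackward(int[] a) {
        long sum = 0;
        for (int i = a.length - 1; i >= 0; i--) { // compare against zero
            sum += a[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3};
        System.out.println(sumForward(a) == sumBackward(a)); // true
    }
}
```

    Counting down only works when the loop body doesn't care about order; for order-sensitive work the forward loop stays.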
  • DannyB (unregistered) in reply to Karl

    Sigh.
    You actually are missing something completely obvious.
    If he's trying to catch index out of bounds, it must throw it somewhere.
    The likely case is that it's doing an array check inside the function call to throw it.
    So he's got the overhead of both the try/catch, and the array check

  • (cs) in reply to Arachnid
    Anonymous:
    Anonymous:
    Yes, there was a CPU/compiler combination where running a for() loop backwards was faster than running it forwards. But that doesn't mean that every for() loop should be run backwards.

    Actually, that's universally true. When your loop counts down to 0, all that's required is checking the zero flag, which the decrement itself already sets. When your loop ends elsewhere, you have to compare the counter against the end value on every iteration and check whether the result is zero or positive.

    Thing is, nowadays most compilers and interpreters do that very simple (yet rewarding) optimisation on their own. Few languages still gain anything from doing it manually.

  • (cs) in reply to dubwai
    dubwai:

    bugsRus:

    See, 8 thousandths of a second over 10 billion iterations, it's faster, just not by much.  If this is really where people try to optimize, I wonder what happens when they have a real bottleneck.

    What if you need to increment 1 billion-trillion-google times?  Huh?  What about that!



    Well, let's see: google (or more properly, googol) = 1e+100, a trillion = 1e+12, and a billion = 1e+9, so a billion-trillion-google times = 1e+121 iterations. Extrapolating from 0.008 seconds per 10 billion iterations, that's 8e-13 seconds each, for a total of 8e+108 seconds = approx 2.5e+101 years = a really, really long time. So you bet I'll use that the next time I have a loop that is that large!

  • LotJ (unregistered) in reply to Travis

    Anonymous:
    Then changing from pre increment to post increment to gain a nanosecond speed benefit would break your code regardless....so the point is moot. Game. Set. Match.

    No.

    1.  The compiler cannot switch between pre- and post-increment on non-primitive types, because the functions being called are different.  Since the compiler cannot assume they perform the same basic operation, it must use the one the programmer calls.  Assuming it can for all data types is stupid.

    2.  For complex data types, assignment and addition can have non-constant runtimes, meaning that for large data sets the difference between pre- and post-increment (which includes an extra copy) could be a matter of seconds (or more), not nanoseconds.

    3.  Assuming any undocumented behaviour of a data type or function call you know nothing about is the sign of a bad programmer.  Depending on the implementation, you could be significantly increasing the runtime of your application because you blindly assume "the compiler will optimize this out."  I've seen programmers increase the complexity of an algorithm by an order of magnitude because they were too lazy to cache a variable instead of putting a function call in a conditional, which could easily happen here with complex data types.

  • (cs) in reply to masklinn

    Java seems to attract Exception-based logic. =D

    http://thedailywtf.com/forums/31162/ShowPost.aspx

  • Ross (unregistered) in reply to heinzkunz

    A colleague once asked if there was a handy perl function which checks if all elements of an array are equal. There isn't, but he was immediately inundated with suggestions, which was kinda unfortunate as there were three perl golfers in the (virtual) room. This was one of my nastier concoctions:

    sort { $a != $b && die } @foo

    (die throws an exception, so you may want to wrap the whole thing in an eval block. By default, exceptions in Perl are fatal.)

  • (cs) in reply to foxyshadis
    foxyshadis:
    Java seems to attract Exception-based logic. =D

    http://thedailywtf.com/forums/31162/ShowPost.aspx


    Not to start up the language holy wars again, but any language can be abused.  This could have been done in any language that implements an exception mechanism (some more easily than others, I'll grant you), but it's the coder, not the tool.
  • greg (unregistered)

    My brain is a little fried at the moment, but isn't there also the more insidious problem of the stack being unwound to its state from before the try block? If this is just printing out data then it's fine, but what if values are being altered? Won't they revert to their original values?

    I could be wrong, but I'm pretty sure I've been caught out on stuff like this in C++, albeit code that's not quite this stupid!

  • Lyndon (unregistered)

    Begin the weeping...

    http://www.jdom.org/docs/faq.html#a0300

     

Leave a comment on “Array of Hope”
