Admin
Depending on your relationship with said person, you might want to explain the difference between mean (average) and median.
Admin
Since when did the definition of 'good' become "ridiculously asinine"?
Admin
No, they are not. At least not in Java. They are thousands of times slower, which still isn't much because conditionals are extremely fast.
Where do people get these ideas?
Admin
No, as with many things this does not apply to Java. However, given the amount of stupid stuff I've seen Java programmers do in C++, simply knowing that every language is not as braindead as Java is a step in the right direction.
Premature optimization is different than good coding habits.
Admin
It was once in some early, fscking Java tutorial. You know, such a thing where some clueless idiot thinks he made the greatest invention since sliced bread and therefore feels the need to pass such nonsense as "good practice".
Admin
You mixed the two statements up. ++i is trivially faster than i++ because i++ evaluates the expression twice (before incrementation and after incrementation) while ++i only does it once (after). Then again, even Javascript doesn't yield any significant difference between the two styles over 10,000,000 cycles on the 3 main browsers (Opera, Firefox and MSIE).
Cycling to 0 (for(i=max-1; i>=0; --i) instead of for(i=0; i<max; ++i)), on the other hand, yields surprisingly impressive results under Firefox (50-70% increase in looping speed with Firefox 1.0.6) and doesn't do anything significant on Opera or MSIE (which shows how unoptimized Firefox's Javascript loops are).
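The count-down idiom described above translates to any language; here is a minimal Python sketch (Python rather than JavaScript, purely for illustration) confirming the two directions visit exactly the same indices, just in opposite order:

```python
max_n = 5

forward = [i for i in range(max_n)]               # like for(i=0; i<max; ++i)
backward = [i for i in range(max_n - 1, -1, -1)]  # like for(i=max-1; i>=0; --i)

print(forward)   # [0, 1, 2, 3, 4]
print(backward)  # [4, 3, 2, 1, 0]
```

Whether the backward form is actually faster depends entirely on the interpreter; in most modern engines the difference is negligible.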
Admin
Preincrement used to be faster on certain C/C++ compilers because of various optimizations that the compiler did or did not make.
In C/C++ today, however, compilers are far more advanced at optimization than they were even 10 years ago.
Admin
No it is not faster. It prevents the hotspot-compiler from removing the array out of bounds checking.
Admin
Except in the case of overloaded operators, the preincrement operator and post increment operators are the same now.
Admin
Let's try that again
Admin
Except that any decent compiler would probably take care of this detail for you...if pre versus post increment does not change the for loop whatsoever...and it provides some sort of speed benefit, then why wouldn't a compiler writer do it?
Admin
Third time's the charm?
Admin
I got completely different results using this code:
static void Main(string[] args)
{
    int[] test = new int[1000000];
    Random r = new Random();
    for (int i = 0; i < test.Length; i++)
        test[i] = r.Next();
    BadLoop(test);
    GoodLoop(test);
    Console.WriteLine("Press enter to continue...");
    Console.Read();
}

static void BadLoop(int[] test)
{
    DateTime before = DateTime.Now;
    try
    {
        int idx = 0;
        while (true)
        {
            DoSomething(test[idx]);
            idx++;
        }
    }
    catch (IndexOutOfRangeException) {}
    TimeSpan span = DateTime.Now - before;
    Console.WriteLine("Bad Loop: " + span.Milliseconds.ToString() + " milliseconds");
}

static void GoodLoop(int[] test)
{
    DateTime before = DateTime.Now;
    for (int i = 0; i < test.Length; i++)
    {
        DoSomething(test[i]);
    }
    TimeSpan span = DateTime.Now - before;
    Console.WriteLine("Good Loop: " + span.Milliseconds.ToString() + " milliseconds");
}

static void DoSomething(int val)
{
    val++;
    val = val % (val / 2);
}
A few test runs resulted in:
Bad Loop: 718 milliseconds
Good Loop: 15 milliseconds
Bad Loop: 671 milliseconds
Good Loop: 15 milliseconds
Bad Loop: 671 milliseconds
Good Loop: 31 milliseconds
Bad Loop: 765 milliseconds
Good Loop: 15 milliseconds
It might have to do with the fact that your displayProductInfo sub doesn't actually do anything.
PS: I know my method of timing is not too precise, but with the large disparity in times, it is still useful. Also, most of this is moot anyways, as the example is java and both of our examples were .NET.
Admin
This isn't quite right. The thing about the reference (pointer) being a copy is correct, but there's only one array being pointed to. It doesn't copy the array, just the pointer, so this method could change array elements. It can't resize the array, though, because the length is not mutable; that part is correct.
Admin
Not so with complex types. Since pre and post inc can have drastically different overloaded operations, the compiler cannot call one instead of the other.
Admin
Please tell me you're joking.
int[] arr = {0, 1, 2};
int idx = 1;
cout << arr[idx++] << endl; // prints 1
idx = 1;
cout << arr[++idx] << endl; // prints 2
Admin
Then changing from pre increment to post increment to gain a nanosecond speed benefit would break your code regardless....so the point is moot. Game. Set. Match.
Admin
It is all just the accumulation of urban myth, programmer voodoo, wishful thinking, and wrongful generalisation of special cases.
Yes, there was a CPU/compiler combination where running a for() loop backwards was faster than running it forwards. But that doesn't mean that every for() loop should be run backwards.
Yes, there was a CPU/compiler combination where using a float as the counter in a loop was faster than using an int. But that doesn't mean ...
Yes, in some languages and circumstances a pre- and post-increment behave differently. But that doesn't mean ...
Yes, there might be a language/compiler combination where an explicit array range checking might be slower than the compiler-inserted array bounds checking. But that doesn't mean ...
Admin
This assumes, of course, that the optimizer is stupid. The optimizer is perfectly free to simply translate both statements as 'i = i+1', since the return value is being discarded. If the return value is not being discarded (x = i++ vs x = ++i), the two code segments don't have the same effect so you can't make this substitution anyway, and in any case the actual code generated will be:
x = i++ ==> x = i; i = i+1
x = ++i ==> i = i+1; x = i
Naturally, if the compiler doesn't know how the increment operator works, it cannot do this optimization, but that's the fault of how the increment operator was written, not a specific issue of ++i vs i++.
Admin
Oh, pre increment is faster with integers in Java; here are my results over 1 billion iterations (times are in milliseconds)...
Timings for 1000000000 iterations:
Post increment 3469
Pre increment 3469
Post increment 3484
Pre increment 3453
Post increment 3469
Pre increment 3453
Post increment 3453
Pre increment 3453
Post increment 3485
Pre increment 3453
Post increment 3453
Pre increment 3562
Post increment 3485
Pre increment 3437
Post increment 3485
Pre increment 3468
Post increment 3485
Pre increment 3453
Post increment 3469
Pre increment 3453
Post increment average 3473
Pre increment average 3465
See, 8 thousandths of a second over a billion iterations; it's faster, just not by much. If this is really where people try to optimize, I wonder what happens when they have a real bottleneck.
Admin
My guess: the programmer tried
Admin
More than once, and as some Javascript checks would show you, it's still true in some situations.
Nope, only every for loop when you're using a badly optimizing/optimized interpreter/compiler.
Admin
Well the method could change properties of the array elements, but the array will still reference the same object no matter what (since the method is only passed one object, not the whole array). Which I think is what both of us meant to say. But if the method were defined like this, for example, it would have no effect on the array:
public void displayProductInfo(Object o) {
    o = null;
}
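The same point can be sketched in Python, where parameter names are likewise local bindings to shared objects (the function names here are hypothetical, for illustration only):

```python
def display_product_info(o):
    # rebinds the *local* name only; the caller's object is untouched
    o = None

def clear_first(items):
    # mutates the shared object, which the caller does see
    items[0] = None

prodnums = [101, 102, 103]

display_product_info(prodnums)
print(prodnums)  # [101, 102, 103] -- rebinding had no effect

clear_first(prodnums)
print(prodnums)  # [None, 102, 103] -- mutation is visible
```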
Admin
I'll try again:
My guess: the programmer tried
Admin
Sorry, I just realized what you meant. I read something into what you wrote that wasn't there.
Admin
What if you need to increment 1 billion-trillion-google times? Huh? What about that!
Admin
No one has yet appreciated the true subtlety of this code. For you see, it's possible that displayProductInfo is the one that is throwing the IndexOutOfBoundsException!
Admin
I don't know too much about Java and how exceptions are handled, or how referencing x[10] when len(x) = 9 works either, but hasn't anyone here heard of
"It's easier to ask forgiveness than permission" ?
While it may be language dependent, replacing multiple if statements with a single try - except clause can be more efficient in terms of writing the damned stuff, and in execution. Or am I missing another wtf? If I could replace 20 conditionals with one except, would you mock me for that?
(That said, I would've done this loop like - )
Admin
I'm not sure I understand what many of you are saying, because I don't know Java. But if we replace that while(true) with a for idx := 1 to len(array), neither the program nor we are checking bounds. So, I don't grasp that thread about the slowness of bound checking vs. exceptions. We need no checking at all. Where am I wrong?
Admin
This was a commonly recommended performance "enhancement" at one time in the early days of Java. The exception handling time is constant while the check varies with the size of the array, so there is a size of the array above which it's faster to catch the exception, and below it's faster to check the counter. Unfortunately there are approximately 0.001 Java applications in the universe where this kind of optimization will actually produce a noticeable result. See "premature optimization".
Reminds me of a company I worked at once, where they were writing a custom app server. Step 1: rewrite StringBuffer without the synchronized keyword. Step 20: start thinking about architecture.
Admin
I should point out I work in Python where a try-except clause is free.
Admin
This was actually recommended by an early Java programming book as a performance optimization, because of the redundant check. But exception handling was and continues to be slow (especially compared to an integer comparison), so it was quickly revoked as best practice.
Admin
If this is true (and assuming prodnums[idx] is a product id), then it's still a WTF, since the method would be throwing an exception about which the caller has absolutely no knowledge. That is to say, the caller is wondering "I passed you a product id! WhereTF is the array I overstepped?!"
If prodnums[idx] is not a product id but, instead, some concrete type, then prodnums is a misnomer and it's still a WTF. I'd say the author would have a hard time wiggling out of this one.
Admin
I of course mean -
Admin
In Java 5.0, Sun went and did this step for you. They created StringBuilder that meets that exact requirement (unsynchronized version of StringBuffer).
Funny thing is, any time you have code that looks like this:
String name = lastname + ", " + firstname;
javac actually generates this:
String name = (new StringBuffer(lastname)).append(", ").append(firstname).toString();
(replace "StringBuffer" with "StringBuilder" for Java 5.0+)
So.... your company was still using StringBuffer all over the place.
Admin
The next thing they need to do is define a "property" keyword, that tells the compiler to generate accessors and mutators for you. (I am aware that various IDEs can generate them for you).
Admin
They probably have a policy that disallows the use of the + operator for String.
Admin
I hope you're not serious.
Admin
How large was your loop? The try-catch overhead is large, but happens once. The redundant check overhead happens once per array item.
For sufficiently large loops, it's provable that this "WTF" code is faster. This limit may be larger than Java's maximum array size (Integer.MAX_VALUE), however. This also assumes that the JVM doesn't perform complex analysis to remove the redundant array bounds checks.
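For what it's worth, the two loop styles under discussion are easy to compare side by side. A Python sketch (not Java; absolute numbers depend heavily on the runtime, and CPython's relative costs differ from any JVM's):

```python
import timeit

data = list(range(100_000))

def checked_loop():
    # "good" style: test the index on every iteration
    i, n, total = 0, len(data), 0
    while i < n:
        total += data[i]
        i += 1
    return total

def exception_loop():
    # "WTF" style: no test at all; let the IndexError end the loop
    i, total = 0, 0
    try:
        while True:
            total += data[i]
            i += 1
    except IndexError:
        return total

assert checked_loop() == exception_loop()
print("checked:  ", timeit.timeit(checked_loop, number=10))
print("exception:", timeit.timeit(exception_loop, number=10))
```

The try/except setup cost is paid once per loop, while the explicit test is paid once per iteration, which is exactly the trade-off described above.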
Admin
Uh, no it isn't: the try clause is almost free, but the except one sure isn't. It's Pythonic to ask forgiveness rather than permission only when you'll end up asking forgiveness less often than you'd have asked permission.
Shouldn't he be? Python has that kind of structure, and it's much easier to work with than Java's endless, senseless and useless pages of getters and setters. Example:
I define my class Foo with a "bar" attribute. We are all consenting adults, so I just leave my attribute there.
Now, by instantiating the Foo class you can freely manipulate the "bar" attribute of my objects.
Oh, but as the program grows more complex, I realise that my current implementation could be much better if I stored different values in bar. Yet I have to leave the interface as it is, since modules rely on it. That's where "property" comes into play:
Bang, I just defined a getter and a setter for my attribute without changing the interface (instantiating a "foo" object from the "Foo" class and accessing foo.bar will transparently call getBar, setBar, and could even call delBar if I had defined it).
No external interface has changed, yet the whole inner logic of the class could've been modified, and I went from public member to private member with getters and setters. All of this perfectly transparently.
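The code examples didn't survive into this thread, but the technique being described looks something like this sketch (class and method names are my guesses, not the poster's actual code):

```python
class Foo:
    def __init__(self):
        self._bar = 0  # storage moved behind a private attribute

    def get_bar(self):
        return self._bar

    def set_bar(self, value):
        self._bar = value

    # existing callers keep writing foo.bar; the property routes those
    # attribute accesses through the getter and setter transparently
    bar = property(get_bar, set_bar)

foo = Foo()
foo.bar = 42      # calls set_bar
print(foo.bar)    # 42, via get_bar
```

The external interface (plain attribute access) is unchanged, which is exactly the point being made.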
Admin
Actually, that's universally true. When your loop ends at 0, all that's required is a CMP operation and checking the zero bit in the flags. When your loop ends elsewhere, you have to subtract the end value from the loop counter and check if the result is zero or positive.
Admin
Sigh.
You actually are missing something completely obvious.
If he's trying to catch an index out of bounds, something must throw it somewhere.
The likely case is that an array check inside the function call throws it.
So he's got the overhead of both the try/catch and the array check.
Admin
Thing is that nowadays most compilers and interpreters innately do that very simple (yet rewarding) optimisation. Few languages still gain anything from doing it manually.
Admin
Well, let's see: google (or more technically, googol) = 1e+100, trillion = 1e+12, and a billion = 1e+9, so a billion-trillion-google times = 1e+121 iterations. Extrapolating from .008 seconds per billion iterations gives 8e+109 seconds = approx 2.5e+102 years = a really, really long time. So you bet I'll use that the next time I have a loop that is that large!
Admin
No.
1. The compiler cannot switch between pre- and post-increment on non-primitive types because the functions being called are different. Since the compiler cannot assume they have the same basic operations, it must use the one the programmer calls. Assuming it can for all datatypes is stupid.
2. For complex data types, assignment and addition can have non-constant runtimes, meaning for large datasets the difference between pre and post (which includes the extra assignment) inc could be a matter of seconds (or more), not nanoseconds.
3. Assuming any undocumented feature about a datatype or function call you know nothing about is a sign of a bad programmer. Depending on the implementation, you could be significantly increasing the runtime of your application because you blindly assume "the compiler will optimize this out." I've seen programmers increase the complexity of an algorithm by an order of magnitude because they were too lazy to cache a variable instead of putting a function call in a conditional, which could easily happen here with complex datatypes.
Admin
Java seems to attract Exception-based logic. =D
http://thedailywtf.com/forums/31162/ShowPost.aspx
Admin
A colleague once asked if there was a handy perl function which checks if all elements of an array are equal. There isn't, but he was immediately inundated with suggestions, which was kinda unfortunate as there were three perl golfers in the (virtual) room. This was one of my nastier concoctions:
sort { $a != $b && die } @foo
(die throws an exception, so you may want to wrap the whole thing in an eval block. By default, exceptions in perl are fatal.)
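A similarly exception-driven version is possible in Python, for anyone who wants to port the golf (a sketch only; the idiomatic check is of course just len(set(foo)) <= 1 for hashable elements):

```python
from functools import cmp_to_key

class Unequal(Exception):
    pass

def all_equal(xs):
    # like the Perl golf: blow up inside the comparator the moment
    # two elements differ, and catch the explosion outside the sort
    def cmp(a, b):
        if a != b:
            raise Unequal
        return 0
    try:
        sorted(xs, key=cmp_to_key(cmp))
        return True
    except Unequal:
        return False

print(all_equal([3, 3, 3]))  # True
print(all_equal([3, 3, 4]))  # False
```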
Admin
Not to start up the language holy wars again, but any language can be abused. I think this could have been done in any language that implements an exception mechanism, some easier than others I'll give you, but it's the coder, not the tool.
Admin
My brain is a little fried at the moment, but isn't there also the more insidious problem of the stack being unwound to the state it was in before the try block? If this is just printing out data then this is fine, but what if values are being altered? Won't they revert back to their original value?
I could be wrong, but I'm pretty sure I've been caught out on stuff like this in C++, albeit code that's not quite this stupid!
Admin
Begin the weeping...
http://www.jdom.org/docs/faq.html#a0300