The Speed-up Loop

  • savar 2008-01-24 10:05
    i done worked with inline assembly all my life, yessir

    i tell you what
  • Tom 2008-01-24 10:07
    It's......it's brilliant.
  • anonymous_coder() 2008-01-24 10:07
    <head-desk>
    Sounds like some of the mechanics I knew, who would break things that were almost broke to keep from having to do hard jobs - windshield wipers, mirrors, etc.

    On the other hand, it's a great way to keep a PHB happy...
  • dlikhten 2008-01-24 10:10
    Speed Loops... BRILLIANT!

    I guess that's what happens when you need to "fill the void"
  • ssprencel 2008-01-24 10:12
    That's...pretty crummy. If they would have put as much effort into creating the GUI as they did avoiding work, then we might have had a real alternative to Microsoft.
  • Tyler 2008-01-24 10:13
    Wayne is a goddamn genius.

    I do something similar at by giving out really long timelines for development and then consistently beating them by huge margins. This also gives me the benefit of extra time if something needs more work than I planned.
  • The MAZZTer 2008-01-24 10:13
    I feel dumb. I can't find the integer overflow. :(

    char *p only goes up to 0x0010423F which is far below the maximum of 0xFFFFFFFF.

    and int i only goes up to 0x000F423F which is also below the maximum of 0x7FFFFFFF.

    The only WTFs I see are a hardcoded memory address, which I suppose could be acceptable depending on what hardware and operating system the code is run under, and the setting of 1000000 bytes to 0 when there's probably a better way to do it (memset is one), although what improvements can be made depends on how the set memory is used (if it is later read without setting, and the program depends on 0 for initial values, then there's probably no way around it).

    [Edit: Wait... it makes more sense if those were int16s... were they?]
  • Alex G. 2008-01-24 10:15
    That is brilliant.

    Really. I'm in awe, this guy is a genius.
  • Crash Magnet 2008-01-24 10:16
    I thought the 80/20 rule was: The first 80% of the project take 20% of your time, the last 20% of the project takes 80% of the time. But I remember it as the 90/10 rule.
  • Rene 2008-01-24 10:20

    I feel dumb. I can't find the integer overflow. :(


    In more or less every DOS compiler you'll find, int defaults to short, aka a 16-bit integer. 0x000F423F > 0xFFFF
  • gabba 2008-01-24 10:21
    Building in future optimizations is a pretty standard technique. In certain organizations you simply have to prepare for defending yourself.

    Very similar to hardware engineers who purposely design in expensive components, knowing they'll be asked to go through a cost reduction phase later.

    No WTF here; just sensible self-defense.
  • smooth menthol flavor 2008-01-24 10:21
    ints were probably only two-byte words back in 1989. int i could only have gone up to 32767.
  • Benjamin Normoyle 2008-01-24 10:21
    I thought 80/20 was how to completely bullshit situations: when you need a percentage, use either 80 or 20 percent as your number. E.g. : "80 % of this department is slacking off," "20% of companies give free soda to their employees," "20% of experts say YO MAMA."
  • Tiver 2008-01-24 10:21
    The MAZZTer:
    I feel dumb. I can't find the integer overflow. :(
    <snip>
    [Edit: Wait... it makes more sense if those were int16s... were they?]


    Correct, which in the days of Windows 2, they most definitely were.
  • sweavo 2008-01-24 10:22
    Rene:

    I feel dumb. I can't find the integer overflow. :(


    In more or less every DOS compiler you'll find, int defaults to short, aka a 16-bit integer. 0x000F423F > 0xFFFF


    Don't forget it's 1987.

    Wayne is a Genius. Where is he working now? Can I apply?
  • sweavo 2008-01-24 10:25

    Don't forget it's 1987.


    oh wait, that would be three months after 1986 not 1989, hang on...

    Don't forget it's 198L
  • Keith 2008-01-24 10:29
    At last!

    Finally, an explanation of why Lotus Notes is so *&^%*@&$ SLOW!!!
  • Tann San 2008-01-24 10:35
    I had a uni lecturer explain how we should do things like this. He said it was great as a contractor: he'd get calls back from past clients asking if he could update their system to run faster, then do the same thing as the OP said, shorten some unnecessary, purposely placed loop, say it took 6+ hours, and voila, instant pay day.
  • Justin 2008-01-24 10:35
    Wow. What a BOFH, or perhaps a BPFH.
  • Anon Coward 2008-01-24 10:38
    I worked on a large consumer application that's on at least version 12 now. The code is sprinkled with sleep(10000) everywhere. Some were hold-overs from past days when you were waiting for a supposed synchronous hardware operation to complete, other times it was because the original programmer didn't understand multi-threading and IPC, so the sleep was to let another thread finish an operation or avoid hammering on a shared resource. Our 'Architect' would regularly go back and reduce the numbers and claim wonderful improvements in how fast the program now ran. Alas, his name wasn't Wayne.
  • Jay 2008-01-24 10:41
    I thought the 80/20 rule was: The first 20% of the project takes 80% of the allotted time, while the remaining 80% of the project takes another 80% of the allotted time.

    The big flaw I see to their speed-up loops is that they should have made the ending value a parameter read in from a config file or something of the sort. Then you wouldn't have to recompile to change the delay. Also, it really should check the system clock rather than just counting. That way you'd get consistent performance savings regardless of the speed of the processor.
  • Battra 2008-01-24 10:45
    I've done that myself, and it's still in production. The client complained that he couldn't see the little animated widget I'd put in for the AJAX call, so I added a .5 second delay on the server side of the AJAX call. Result, happy (but stupid) client.
  • gcc4ef3r 2008-01-24 10:51
    uh, wouldn't the compiler optimize that away? i mean, depending on the intelligence of the compiler...
  • T $ 2008-01-24 10:54
    Can't quite figure out why the last contractor "just left all of a sudden." Did he decide he wanted a real job instead?
  • hallo.amt 2008-01-24 10:59
    not back then, optimization is a quite new field
  • Zylon 2008-01-24 11:00
    Wayne and Wane really need to work out their split personality issues.
  • Alcari 2008-01-24 11:01
    Oh, i've done that.
    Someone complained that sometimes saving a file was too fast, so he didn't know if it was successful, since the status bar popped in and out too fast.

    Solution:
    If filesize < 2mb
    open FakeSaveAnimation.gif
    end
  • VolodyA! V A 2008-01-24 11:05
    The funny for me was the google ad on this exact page:

    "Slow computer? Speed it up by cleaning your registry..." maybe there is some NumberOfZerosInSpeedupLoop in Windows registry?
  • Mick 2008-01-24 11:07
    gcc4ef3r:
    uh, wouldn't the compiler optimize that away? i mean, depending on the intelligence of the compiler...


    I read statements like this again and again, but I'll be damned if I've ever seen a compiler do that. Ok, ok, I've only used the compiler that comes with (various versions of) Visual C++ but it is supposed to optimize some and even with full optimizations on in a release build it's never optimized anything like this away that I've found.
  • jon 2008-01-24 11:14
    Isn't the 80/20 rule... 80% of the work is done by 20% of the people? No wait, I'm thinking of the 90/10 rule.
  • SomeCoder 2008-01-24 11:14
    Mick:
    gcc4ef3r:
    uh, wouldn't the compiler optimize that away? i mean, depending on the intelligence of the compiler...


    I read statements like this again and again, but I'll be damned if I've ever seen a compiler do that. Ok, ok, I've only used the compiler that comes with (various versions of) Visual C++ but it is supposed to optimize some and even with full optimizations on in a release build it's never optimized anything like this away that I've found.

    It will optimize it away. If you just set to Release mode in Visual Studio, that isn't full optimizations. You need to tell it that you want it to run as fast as possible too.

    One time, I tried something like this:

    while (true);

    Set the mode to Release and the optimizations to "Speed" and voila, program runs, no infinite loop.

    Back in the late 80s though, there probably weren't optimizations like this so that for loop would have happily done nothing... 1000000 times.
  • mallard 2008-01-24 11:18
    Mick:
    gcc4ef3r:
    uh, wouldn't the compiler optimize that away? i mean, depending on the intelligence of the compiler...


    I read statements like this again and again, but I'll be damned if I've ever seen a compiler do that. Ok, ok, I've only used the compiler that comes with (various versions of) Visual C++ but it is supposed to optimize some and even with full optimizations on in a release build it's never optimized anything like this away that I've found.



    The C# compiler in VS2008 certainly doesn't. Built in release mode with optimisation:
    Source:

    static void Main(string[] args)
    {
    for (int i = 0; i < 100000; i++) ;
    Console.WriteLine("Hello World!");
    }


    Disassembly (from Lutz Roeder's .NET Reflector):

    private static void Main(string[] args)
    {
    for (int i = 0; i < 0x186a0; i++)
    {
    }
    Console.WriteLine("Hello World!");
    }
  • jtl 2008-01-24 11:23
    Alcari:
    Oh, i've done that.
    Someone complained that sometimes saving a file was too fast, so he didn't know if it was successful, since the status bar popped in and out too fast.

    Solution:
    If filesize < 2mb
    open FakeSaveAnimation.gif
    end


    Apparently giving the user a 'save complete' message was too much hassle?
  • AdT 2008-01-24 11:23
    SomeCoder:
    One time, I tried something like this:

    while (true);

    Set the mode to Release and the optimizations to "Speed" and voila, program runs, no infinite loop.

    Back in the late 80s though, there probably weren't optimizations like this so that for loop would have happily done nothing... 1000000 times.


    What you mentioned is not an optimization, it's a breaking change. Removing an infinite loop is not a speed-up, it's a major semantic change. Just imagine you had put "DeleteAllFilesInUsersHomeDir();" after the loop.

    Visual C++ is notorious for "optimizing" code in ways that break it. I encountered such cases myself and have seen co-workers run into similar issues where the Debug code ran fine and the Release version screwed up big time and disabling some optimizations resolved these issues.
  • GettinSadda 2008-01-24 11:24
    Mick:
    gcc4ef3r:
    uh, wouldn't the compiler optimize that away? i mean, depending on the intelligence of the compiler...


    I read statements like this again and again, but I'll be damned if I've ever seen a compiler do that. Ok, ok, I've only used the compiler that comes with (various versions of) Visual C++ but it is supposed to optimize some and even with full optimizations on in a release build it's never optimized anything like this away that I've found.


    WRONG!!!

    // OptTest.cpp : Defines the entry point for the console application.
    
    //

    #include "stdafx.h"

    int _tmain(int argc, _TCHAR* argv[])
    {
    const char *Before = "Before\n";
    const char *After = "After\n";

    printf(Before);
    00401000 push offset string "Before\n" (406104h)
    00401005 call printf (40101Ah)

    int i;
    for(i=0;i<1000000;i++) {;}

    printf(After);
    0040100A push offset string "After\n" (4060FCh)
    0040100F call printf (40101Ah)
    00401014 add esp,8

    return 0;
    00401017 xor eax,eax
    }
    00401019 ret
  • ChiefCrazyTalk 2008-01-24 11:27
    Jay:
    I thought the 80/20 rule was: The first 20% of the project takes 80% of the allotted time, while the remaining 80% of the project takes another 80% of the allotted time.

    The big flaw I see to their speed-up loops is that they should have made the ending value a parameter read in from a config file or something of the sort. Then you wouldn't have to recompile to change the delay. Also, it really should check the system clock rather than just counting. That way you'd get consistent performance savings regardless of the speed of the processor.



    Ummm, it's the other way around.
  • Ozz 2008-01-24 11:28
    Crash Magnet:
    I thought the 80/20 rule was: The first 80% of the project take 20% of your time, the last 20% of the project takes 80% of the time. But I remember it as the 90/10 rule.

    It's actually the 90/90 rule. The first 90% of the work takes 90% of the time, and the remaining 10% of the work takes the other 90% of the time.
  • SuperousOxide 2008-01-24 11:31
    mallard:

    The C# compiler in VS2008 certainly doesn't. Built in release mode with optimisation:
    <snip>


    There's no optimization the compiler could do there. Removing the for loop would change the behavior of the program (by not printing out the "Hello World!"). If you removed the WriteLine, then it should remove the loop.
  • Matthew 2008-01-24 11:32
    Integer overflow!? How is that the "obvious" error? He had me at *p = 0x10000;. I'm sorry, but if you're picking an arbitrary memory address and writing 1 million zeros starting at that point, the LEAST of your problems will be the integer overflow of i. Hell, in a system like DOS, you may not even make it to the point of overflowing i before the system hangs because you wrote over some video buffer or something.
  • derula 2008-01-24 11:33
    TRWTF is that the article has 2^10 words.
  • AdT 2008-01-24 11:34
    SuperousOxide:
    There's no optimization the compiler could do there. Removing the for loop would change the behavior of the program (by not printing out the "Hello World!").


    No, it wouldn't. In either case, "Hello World!" is printed exactly once.
  • Ben 2008-01-24 11:35
    mallard:
    Mick:
    gcc4ef3r:
    uh, wouldn't the compiler optimize that away? i mean, depending on the intelligence of the compiler...


    I read statements like this again and again, but I'll be damned if I've ever seen a compiler do that. Ok, ok, I've only used the compiler that comes with (various versions of) Visual C++ but it is supposed to optimize some and even with full optimizations on in a release build it's never optimized anything like this away that I've found.



    The C# compiler in VS2008 certainly doesn't. Built in release mode with optimisation:
    Source:

    static void Main(string[] args)
    {
    for (int i = 0; i < 100000; i++) ;
    Console.WriteLine("Hello World!");
    }


    Disassembly (from Lutz Roeder's .NET Reflector):

    private static void Main(string[] args)
    {
    for (int i = 0; i < 0x186a0; i++)
    {
    }
    Console.WriteLine("Hello World!");
    }


    Probably because that loop actually does something?
  • GettinSadda 2008-01-24 11:38
    Mick:
    gcc4ef3r:
    uh, wouldn't the compiler optimize that away? i mean, depending on the intelligence of the compiler...


    I read statements like this again and again, but I'll be damned if I've ever seen a compiler do that. Ok, ok, I've only used the compiler that comes with (various versions of) Visual C++ but it is supposed to optimize some and even with full optimizations on in a release build it's never optimized anything like this away that I've found.


    WRONG TAKE 2!!!

    Same code compiled in Visual C++ 6
    00401000   push        406038h
    
    00401005 call 00401020
    0040100A push 406030h
    0040100F call 00401020
    00401014 add esp,8
    00401017 xor eax,eax
    00401019 ret
    0040101A nop
    0040101B nop
    0040101C nop
    0040101D nop
    0040101E nop
    0040101F nop
  • Ben 2008-01-24 11:38
    That is a WTF? No

    That's a GREAT IDEA
  • mallard 2008-01-24 11:39
    SuperousOxide:
    mallard:

    The C# compiler in VS2008 certainly doesn't. Built in release mode with optimisation:
    <snip>


    There's no optimization the compiler could do there. Removing the for loop would change the behavior of the program (by not printing out the "Hello World!"). If you removed the WriteLine, then it should remove the loop.


    No, the program only prints out "Hello World!" once, after the loop. Notice the ";" at the end of "for (int i = 0; i < 100000; i++) ;", that is, as the disassembly showed, an empty loop body, equivalent to "{ }".

    Besides, even with the "Console.WriteLine("Hello World!");" removed (commented out), the disassembly becomes:

    private static void Main(string[] args)
    {
    for (int i = 0; i < 0x186a0; i++)
    {
    }
    }


    The loop still remains.
  • clively 2008-01-24 11:40
    Ben:
    That is a WTF? No

    That's a GREAT IDEA


    I really hope none of you guys work for Microsoft.
  • AdT 2008-01-24 11:41
    Matthew:
    Hell, in a system like DOS, you may not even make it to the point of overflowing i before the system hangs because you wrote over some video buffer or something.


    What? I've done some video programming under DOS and if there was a memory area with fixed address that you wanted to overwrite with zeroes, it was the video RAM. And no, the system would not hang if you did this.

    The system would rather hang (or probably crash in a more spectacular way) if you overwrote some hardware interrupt handler or some code/data structures used by such a handler. You could prevent that by executing CLI before the loop. Of course that would only make any sense if you wanted to kick DOS from memory altogether, for example to load your own OS instead.
  • AdT 2008-01-24 11:46
    Ben:
    Probably because that loop actually does something?


    It's amazing how many people don't understand the C# syntax although in the case of this loop, it's absolutely identical to that of C or C++.

    This is a loop whose body consists of the empty statement. It has no effect on program semantics.
  • SomeCoder 2008-01-24 11:47
    AdT:
    SomeCoder:
    One time, I tried something like this:

    while (true);

    Set the mode to Release and the optimizations to "Speed" and voila, program runs, no infinite loop.

    Back in the late 80s though, there probably weren't optimizations like this so that for loop would have happily done nothing... 1000000 times.


    What you mentioned is not an optimization, it's a breaking change. Removing an infinite loop is not a speed-up, it's a major semantic change. Just imagine you had put "DeleteAllFilesInUsersHomeDir();" after the loop.

    Visual C++ is notorious for "optimizing" code in ways that break it. I encountered such cases myself and have seen co-workers run into similar issues where the Debug code ran fine and the Release version screwed up big time and disabling some optimizations resolved these issues.



    The point was not that the loop was infinite; it was that the loop did nothing. Consider:


    someThreadVariable = 400;

    while (someThreadVariable != 500);

    DeleteAllFilesInUsersHomeDir();


    If we aren't careful with threading, someThreadVariable could be changed by another thread. So the current thread should "pause" until the other thread changes the variable to 500.

    If you run this in debug mode, it will work as you would expect. If you turn on optimizations, Visual Studio will see this as a loop that does nothing and remove it and the "pause" that we would expect is gone.

    NOTE: I am not saying that this type of coding is a good idea! It's just an example!!
  • Intelligent Layman 2008-01-24 11:54
    > Don't forget it's 1987.

    It's not pretend-to-be-a-time-traveler day. You must really be a time traveller.
  • Hat 2008-01-24 12:03
    Which would be what the 'volatile' keyword is for.
  • notromda 2008-01-24 12:07
    Tann San:
    I had a uni lecturer explain how we should do things like this. He said it was great as a contractor as he'd get calls back from past clients asking if he could update their system to run faster, he'd do the same thing as the OP said; shorten some un-necessary purposefully placed loop, say it took 6+ hours doing it and voila, instant pay day.


    Wow. See, this is why computer science needs a professional "engineering" association, which includes ethics in the prerequisites. It exists in other fields, why not in computer programming?
  • Mike 2008-01-24 12:09
    Err, maybe I'm missing something--but given the 16-bit overflow, wouldn't the "speed-up loop" be infinite? Or is that the real WTF?
  • Teh Optermizar 2008-01-24 12:12
    Ben:
    Probably because that loop actually does something?


    Ben, read each character on the 'for' loop line very carefully... the trailing semi-colon is the catch.

    Failing that... curly brace matching on the reflected code FTW!!

  • ac 2008-01-24 12:12
    A lot of optimising compilers probably have specific code for recognising a completely empty loop and _not_ removing it on the grounds that it's a common (albeit broken) idiom for a delay, and people would grumble if their software stopped working.
  • Nulla 2008-01-24 12:13
    Matthew:
    Integer overflow!? How is that the "obvious" error? He had me at *p = 0x10000;. I'm sorry, but if you're picking an arbitrary memory address and writing 1 million zeros starting at that point, the LEAST of your problems will be the integer overflow of i. Hell, in a system like DOS, you may not even make it to the point of overflowing i before the system hangs because you wrote over some video buffer or something.

    Clever remark; however, writing over the video buffer is perfectly valid in DOS, and will certainly not hang the system. In fact, that's exactly how you would change what is displayed.
  • Teh Optermizar 2008-01-24 12:15
    notromda:
    Tann San:
    I had a uni lecturer explain how we should do things like this. He said it was great as a contractor as he'd get calls back from past clients asking if he could update their system to run faster, he'd do the same thing as the OP said; shorten some un-necessary purposefully placed loop, say it took 6+ hours doing it and voila, instant pay day.


    Wow. See, this is why computer science needs a professional "engineering" association, which includes ethics in the prerequisites. It exists in other fields, why not in computer programming?


    This may be a very naive point of view, but I agree with your sentiment... intentionally crippling a solution for the purposes of future reward is a bit too morally corrupt for my tastes... transparency and honesty are my watchwords as a contractor, and they have never steered me wrong
  • Micirio 2008-01-24 12:17
    Maybe i isn't an int.
  • Teh Optermizar 2008-01-24 12:18
    derula:
    TRWTF is that the article has 2^10 words.


    You are truly evil... I couldn't resist the urge to paste this into Word to see if you were serious... curse ye!
  • Matthew 2008-01-24 12:29
    AdT:
    What? I've done some video programming under DOS and if there was a memory area with fixed address that you wanted to overwrite with zeroes, it was the video RAM. And no, the system would not hang if you did this.


    Sorry, I just picked some random thing that could happen if you wrote to arbitrary areas of memory in a system without memory protection. You're right, of course. Writing to a video buffer won't crash the system. It might, however, make it difficult to see the results of overflowing the integer.

    Genius programmer: "Damn it! Every time I run this program the video goes blank and I can't read the error message. Oh! I know, I'll just log the error to a file!"

    *picks random sector of disk and writes debugging information before and after the loop*

    My point is that the OBVIOUS error in that code sample was picking a seemingly arbitrary point in memory to write zeros. The integer overflow was actually rather subtle.

  • vt_mruhlin 2008-01-24 12:29
    After giving a brief tour of the MTWS and the logistics application, Ben left for the day...


    His first day and he's already giving tours?!
  • B 2008-01-24 12:34
    Teh Optermizar:
    Ben:
    Probably because that loop actually does something?


    Ben, read each character on the 'for' loop line very carefully... the trailing semi-colon is the catch.

    Failing that... curly brace matching on the reflected code FTW!!



    Yeah, I was a bit hasty on that one. At least I wasn't the only one though :)
  • Chris 2008-01-24 12:38
    Re: the 80/20 rule... that reminds me of the worst project I've ever worked on. The whole thing was designed in two weeks by the management, and then they gave the design to us programmers. I should have run for the hills right then...

    So anyway, when we said that there was no way we could build the system in their time-frame (about six months), they invoked the 80/20 rule. They said, 80% of our users will only need 20% of the functionality, so build that first. When we pointed out that the system would be a total disaster unless we accounted for the other, complicated, 20% of users from the start, they said "don't worry about that, just build the code for the easy users first, then add on the extra stuff later".

    The project was a total disaster. Version 1.0 was released, but the users were told not to use it. The managers got their bonuses - ker-ching. The whole thing was scrapped a few months later.
  • JonC 2008-01-24 12:42
    Teh Optermizar:
    derula:
    TRWTF is that the article has 2^10 words.


    You are truly evil... I couldn't resist the urge to paste this into Word to see if you were serious... curse ye!


    It tells you that it's 1024 words on the front page
  • Anon 2008-01-24 12:47
    Benjamin Normoyle:
    I thought 80/20 was how to completely bullshit situations: when you need a percentage, use either 80 or 20 percent as your number. E.g. : "80 % of this department is slacking off," "20% of companies give free soda to their employees," "20% of experts say YO MAMA."


    I think only 80% of people define it that way...or was it 20%?
  • Richard Sargent 2008-01-24 12:52
    hallo.amt:
    not back then, optimization is a quite new field


    I hope you are kidding! Check out the article date on http://portal.acm.org/citation.cfm?id=365000.

    I agree substantial advances have occurred in optimizations since 1989, but it was hardly "new".
  • tezoatlipoca 2008-01-24 12:56
    JonC:
    Teh Optermizar:
    derula:
    TRWTF is that the article has 2^10 words.


    You are truly evil... I couldn't resist the urge to paste this into Word to see if you were serious... curse ye!


    It tells you that it's 1024 words on the front page

    Where? Have I been staring at code too long or can I blame IE for rendering WTF poorly?
  • vt_mruhlin 2008-01-24 12:56
    I always looked at the 80/20 rule as Scotty's Law...

    Scotty: How long did you tell the captain the deflector work would take us?
    LaForge: Two hours and we have barely more than that till the Borg cube arrives.
    Scotty: And how long will it actually take us?
    LaForge: Two hours-like I told him.
    Scotty: Geordi, ye've got a lot to learn. You never tell captains how long it will really take. How do you expect to earn a reputation as a miracle worker that way?
  • Zylon 2008-01-24 13:03
    Since nobody has actually posted the correct definition of the 80/20 rule yet--
    The Pareto principle (also known as the 80-20 rule, the law of the vital few and the principle of factor sparsity) states that, for many events, 80% of the effects come from 20% of the causes. Business management thinker Joseph M. Juran suggested the principle and named it after Italian economist Vilfredo Pareto, who observed that 80% of income in Italy went to 20% of the population. It is a common rule of thumb in business; e.g., "80% of your sales comes from 20% of your clients."

    So for example, in any given programming shop, 80% of the usable work might be generated by only 20% of the programmers.

    The example that "Wayne" gives appears to be a mangling of Westheimer's Rule:
    To estimate the time it takes to do a task: estimate the time you think it should take, multiply by 2, and change the unit of measure to the next highest unit. Thus we allocate 2 days for a one hour task.
  • Teh Optermizar 2008-01-24 13:03
    JonC:
    Teh Optermizar:
    derula:
    TRWTF is that the article has 2^10 words.


    You are truly evil... I couldn't resist the urge to paste this into Word to see if you were serious... curse ye!


    It tells you that it's 1024 words on the front page


    Aha... I came in via a direct link in the RSS feed, so I didn't see that... it's only 892 according to Word (904 if you include the title and by-line gubbins)

    Obviously this is an indication that Microsoft is short-changing me on words... bastards... :P
  • Lobster 2008-01-24 13:06
    tezoatlipoca:
    JonC:
    Teh Optermizar:
    derula:
    TRWTF is that the article has 2^10 words.


    You are truly evil... I couldn't resist the urge to paste this into Word to see if you were serious... curse ye!


    It tells you that it's 1024 words on the front page

    Where? Have I been staring at code too long or can I blame IE for rendering WTF poorly?


    Blaming IE is always a good idea.

    But seriously, the word count is located to the right of the "Full Article" link on the FRONT page (not the article page)
  • joe 2008-01-24 13:19
    The traditional 80/20 rule is: 80% of the coding is done in 20% of the time, and the last 20% is done in 80% of the time (or more).
  • Da Koochman 2008-01-24 13:35
    Honestly, thats brilliant!
  • foo 2008-01-24 13:37
    Mike:
    Err, maybe I'm missing something--but given the 16-bit overflow, wouldn't the "speed-up loop" be infinite? Or is that the real WTF?


    On a 16-bit machine the loop should run 16960 times.
  • fist-poster 2008-01-24 13:40
    Ok, if there's an int overflow, how would the loop even compile, or how many times would it run if it compiled?

    for(i=0;i<1000000;i++) {;}

    So the fix would be to use nested speed-up loops?

    Could you post the codes because I need too many iterations to speed up my programs.

  • elias 2008-01-24 13:42
    notromda:
    Tann San:
    I had a uni lecturer explain how we should do things like this. He said it was great as a contractor as he'd get calls back from past clients asking if he could update their system to run faster, he'd do the same thing as the OP said; shorten some un-necessary purposefully placed loop, say it took 6+ hours doing it and voila, instant pay day.


    Wow. See, this is why computer science needs a professional "engineering" association, which includes ethics in the prerequisites. It exists in other fields, why not in computer programming?

    I have a BS in Software Engineering, and we had a class that was a prerequisite for pretty much everything which had computer science ethics as a good chunk of its curriculum.
  • Zylon 2008-01-24 13:43
    joe:
    The traditional 80/20 rule is: 80% of the coding is done in 20% of the time, and the last 20% is done in 80% of the time (or more).

    Also wrong. Your "80/20" rule is a mangling of the 90-90 rule--
    Tom Cargill:
    The first 90% of the code accounts for the first 90% of the development time. The remaining 10% of the code accounts for the other 90% of the development time.

    Ninety-ninety rule
  • Steve 2008-01-24 13:43


    someThreadVariable = 400;
    while (someThreadVariable != 500);
    DeleteAllFilesInUsersHomeDir();



    The compiler is quite right to optimise the loop test away in this case. If you don't want it to, you have to declare someThreadVariable volatile. A common misconception about C is that it's just a slightly advanced assembler, whereas the compiler is actually allowed to rearrange the code to its heart's content, so long as it still appears to do what the code says, and all accesses to volatile variables and extern functions occur in the precise places they do in the code.
  • Ben Tremblay 2008-01-24 13:44
    What pains me is that so very very few folk know how Byzantine some work situations really are. I mean, you have to have been there at least once in your life ... straight out of Kafka's "The Castle."
    This is a great story.

    I've been pimping it as an alt interpretation of 80/20, but here's my alt: you get 80% of the buzz by doing 20% of the heavy lifting; or putting it otherwise, with 20% of the effort you can cobble something together that provides 80% service to 80% of the users in 80% of situations. So where's the incentive to actually finish anything?

    My sympathy to anybody with actual needs and requirements, I mean more than the need for plausible deniability. Most software I'm seeing seems designed as entertainment ... "constant partial attention" and "social objects", the stuff of mass delusion.

    --bentrem

    p.s.1 "The corruption of the best is the worst."
    p.s.2 "Incompetence is the thin edge of a wedge called corruption."
  • Mirko 2008-01-24 13:44
    Don't forget about the sign bit so you only get half the range.
  • Sean 2008-01-24 13:48
    An int is 32 bits on a 32bit machine. The 386 came out in 1985, it's very likely that in 1990 they were using 386s and not 286s... therefore it's not an overflow caused by a 16bit int.
  • Patrick 2008-01-24 13:49
    One coworker suggested the following way to estimate projects (I'm not sure if he was serious or not!)

    Take your best guess, multiply by 2, add 1, and move to the next higher unit of measurement.

    So, if you think it's two hours, estimate 5 days.
  • Andrew 2008-01-24 13:50
    ssprencel:
    That's...pretty crummy. If they would have put as much effort into creating the GUI as they did avoiding work, then we might have had a real alternative to Microsoft.


    Apple Macintosh, X-Windows, & SmallTalk all had mature GUI systems by 1989 (or 1987 as some comments mark the story). Microsoft Windows is, by far, the latest entry to the GUI world.*

    People bought Microsoft Windows because, like MS-DOS, it was cheaply bundled with PCs. VHS beat out Betamax on a similar price issue. Sometimes shit floats to the top.

    * NOTE: Although NextStep and Mac OSX are newer than MS Windows, they are really offshoots of Apple Macintosh.
  • Patrick 2008-01-24 13:56
    Mick:
    gcc4ef3r:
    uh, wouldn't the compiler optimize that away? i mean, depending on the intelligence of the compiler...


    I read statements like this again and again, but I'll be damned if I've ever seen a compiler do that. Ok, ok, I've only used the compiler that comes with (various versions of) Visual C++ but it is supposed to optimize some and even with full optimizations on in a release build it's never optimized anything like this away that I've found.



    Let's assume it did. If you made the variable volatile, it would guarantee that the loop couldn't be optimized away.
  • optimized away? 2008-01-24 13:56
    > for(i=0;i<1000000;i++) {;}

    Not sure how much impact this would have on the processing time, though. I guess a modern compiler would optimize it away?

    -{II}-
  • jd 2008-01-24 14:02
    Yes, but it's like any other rule-of-thumb: useful to know, but if that kind of thing is *all* you know, *then* you're just bullshitting.
  • Office Space 2008-01-24 14:06
    Isn't Initech the name of the company in the movie Office Space?
  • gwenhwyfaer 2008-01-24 14:10
    In more or less every DOS compiler you'll find, int defaults to short, aka a 16-bit integer. 0x000F423F > 0xFFFF


    So it's kind of unfortunate that the pointer 0x10000 wouldn't fit in a 16-bit pointer address (which DOS compilers also have to suffer under) either... 0x1000:0 maybe, if it's compiled Huge.
  • me too 2008-01-24 14:16
    IAWTC.
  • Jason 2008-01-24 14:20
    Battra:
    I've done that myself, and it's still in production. The client complained that he couldn't see the little animated widget I'd put in for the AJAX call, so I added a .5 second delay on the server side of the AJAX call. Result, happy (but stupid) client.


    Actually, in terms of usability, this is a fairly common technique and there is some psychology behind it. When performing an AJAX operation that saves data, for example, users tend to disbelieve that it could have possibly saved in such a short amount of time. The calls tend to be almost instant. Thus, they'll click several times just "to make sure."

    Users are (and I'm generalizing hardcore here) still conditioned to believe that the form they just spent a full minute filling out needs at least a few seconds to save. They want to see something happening. So, a half-second pause to display a "Saving..." animation always does the trick. It soothes the user's preconceived notions about how this stuff should work and actually looks nice by giving a visual cue. So, I wouldn't quite call that a WTF.

    Besides, as per Wayne, when your happy (but conditioned) client gets annoyed that it takes too long to save, you can boast major speed improvements with ease!
  • MadeItUp 2008-01-24 14:20
    Zylon:
    Since nobody has actually posted the correct definition of the 80/20 rule yet--
    The Pareto principle (also known as the 80-20 rule, the law of the vital few and the principle of factor sparsity) states that, for many events, 80% of the effects comes from 20% of the causes. Business management thinker Joseph M. Juran suggested the principle and named it after Italian economist Vilfredo Pareto, who observed that 80% of income in Italy went to 20% of the population. It is a common rule of thumb in business; e.g., "80% of your sales comes from 20% of your clients."

    So for example, in any given programming shop, 80% of the usable work might be generated by only 20% of the programmers.

    The example that "Wayne" gives appears to be a mangling of Westheimer's Rule:
    To estimate the time it takes to do a task: estimate the time you think it should take, multiply by 2, and change the unit of measure to the next highest unit. Thus we allocate 2 days for a one hour task.


    And I thought the 80/20 rule was that 80% of users only use 20% of the features, but a different 20%
  • Charles 2008-01-24 14:33


    int i;
    char *p = 0x10000;
    for (i = 0; i < 1000000;i++)
    {
    *p++ = 0;
    }


    I thought the bug was that it writes into unallocated memory.
  • Spectre 2008-01-24 14:41
    Hat:
    Which would be what the 'volatile' keyword is for.


    Nobody says it was unused.

    fist-poster:

    Ok, if there's an int overflow, how would the loop even compile or how many times it would run if it compiled?

    for(i=0;i<1000000;i++) {;}

    So the fix would be to use nested speed-up loops?


    Nobody says it was an int, either. The fix would obviously be to use long.

    Sean:

    An int is 32 bits on a 32bit machine. The 386 came out in 1985, it's very likely that in 1990 they were using 386s and not 286s... therefore it's not an overflow caused by a 16bit int.


    An int is what the compiler defines it to be. The only guarantee is that int can represent numbers from -32767 to +32767.
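
    A sketch of that fix; on a compiler with a 16-bit int, only a long counter is guaranteed to reach one million:

    ```c
    #include <assert.h>
    #include <limits.h>

    /* C guarantees LONG_MAX >= 2147483647, so a long loop counter can
     * safely count to 1000000 even where int is 16 bits. */
    long count_iterations(long n)
    {
        long i, count = 0;
        for (i = 0; i < n; i++)
            count++;
        return count;
    }
    ```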
  • lenny 2008-01-24 14:50
    int i;
    char *p = 0x10000;
    for (i = 0; i < 1000000;i++)
    {
    *p++ = 0;
    }

    uhmm, maybe i'm missing something, or back in the 80's the program could just set zeroes to a megabyte of memory it didn't allocate?
  • Carnildo 2008-01-24 14:59
    gcc4ef3r:
    uh, wouldn't the compiler optimize that away? i mean, depending on the intelligence of the compiler...


    1989. Optimizing compilers were just barely better than non-optimizing ones, and far worse than hand-optimization.
  • Carnildo 2008-01-24 15:03
    lenny:
    int i;
    char *p = 0x10000;
    for (i = 0; i < 1000000;i++)
    {
    *p++ = 0;
    }

    uhmm, maybe i'm missing something, or back in the 80's the program could just set zeroes to a megabyte of memory it didn't allocate?


    MS-DOS has absolutely no memory protection, and memory allocation is just a courtesy.
  • Carnildo 2008-01-24 15:07
    Spectre:
    An int is what the compiler defines it to be. The only guarantee is that int can represent numbers from -32767 to +32767.


    Only if you're using C++ or a recent version of the C standard. The original guarantee is that:

    sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)

    with additional guarantees of

    sizeof(void *) == sizeof(int)
    sizeof(unsigned x) == sizeof(signed x)

    and that a char is large enough to store a single text character in the platform's preferred text encoding. On a machine using 7-bit ASCII, you could theoretically have an int with a range of -64 to 63.
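
    The size ordering is easy to check on any particular compiler; a quick sketch:

    ```c
    #include <assert.h>
    #include <stdio.h>

    /* Report the sizes this compiler actually uses. The classic ordering
     * char <= short <= int <= long must hold; the old assumption that
     * sizeof(void *) == sizeof(int) visibly fails on most 64-bit
     * platforms, where pointers are 8 bytes but int is 4. */
    void print_type_sizes(void)
    {
        printf("char=%u short=%u int=%u long=%u void*=%u\n",
               (unsigned)sizeof(char), (unsigned)sizeof(short),
               (unsigned)sizeof(int), (unsigned)sizeof(long),
               (unsigned)sizeof(void *));
    }
    ```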
  • swei 2008-01-24 15:11
    The 80/20 rule is a hypothesis (or idiom, not sure what to call it) that 20% of your software bugs will take up 80% of your time. Or something like that.
  • Old fart 2008-01-24 15:17
    Reminds me of the days of IBM mainframe programming with CICS back in the '70s. The CICS programmers would introduce a throttle in new applications to increase response time to a few seconds on each transaction. As the CICS region became more overloaded during the subsequent months with the increase in traffic and additional applications, the response time would degrade. So they would just reduce the throttle to maintain the expected response time.

    This would go on until the system was saturated and no more reductions were available. Then they would upgrade the hardware and start all over again.
  • Zylon 2008-01-24 15:17
    swei:
    The 80/20 rule is a hypothesis (or idiom, not sure what to call it) that 20% of your software bug will take up 80% of your time. Or something like that.

    I swear to god, I must be posting in INVISIBLE MODE or something.
  • Hans Meine 2008-01-24 15:17
    Am I the only one who had the strong feeling that this story is really special somehow?

    After a lot of pondering, I now know what's so striking about it: Those people have working code!

    I mean - lucky Ben does not have to dig through the ugliest of all codes, he does not even have to do a lot of maintenance, that sounds like really nice code!

    Other facts that are quite uncommon here on theDailyWTF:
    * A job interview that does not last long, but not because of some dumb-ass candidate or interviewer!
    * A project leader who can actually answer the question what the purpose of a single code statement is, even immediately!

    TRWTF is that today, there is none.
  • al 2008-01-24 15:26
    int = -2^15 to 2^15 - 1
    1000000 > 2^15
  • BeerWineLiquor 2008-01-24 15:38
    I thought it meant rewriting 80% of the shitty outsourced code by the 20% of original staff left with jobs...
  • Fedaykin 2008-01-24 15:39
    A lot of what is sorta WTF in here is really just a perversion of good practices.

    Padding a time estimate is always the best option. Being able to go to a client/boss and say "Hey, we got done early!" is a lot better than "oh shit, we need more time!". The problem, of course, is the extreme abuse of this.

    KIRK: "Scotty, do you always multiply your repair estimates by a factor of four?"

    SCOTTY: "How else can I maintain my reputation as a miracle worker?"
  • mallard 2008-01-24 15:47
    Sean:
    An int is 32 bits on a 32bit machine. The 386 came out in 1985, it's very likely that in 1990 they were using 386s and not 286s... therefore it's not an overflow caused by a 16bit int.


    But the first mainstream 32-bit* OS (for PCs) didn't arrive until 1995. Up until then, 386s (and 486s) spent the vast majority of their time emulating slightly improved 286s.

    *Windows 95 was more of a 16/32-bit hybrid, but it ran 16-bit code in V86 mode, meaning that the processor spent most of its time in a mode that was at least partially 32-bit.
  • Joon 2008-01-24 15:47
    Tyler:
    Wayne is a goddamn genius.

    I do something similar at by giving out really long timelines for development and then consistently beating them by huge margins. This also gives me the benefit of extra time if something needs more work than I planned.


    I hate to break it to you, but any competent project manager would have realized long ago that you over-estimate ridiculously, and would take that into account when planning entire projects
  • OnlyMe 2008-01-24 15:53
    Where I used to work we used to joke doing this very thing!
  • Chmeee 2008-01-24 15:58
    GCC will not optimize away the empty loop. It is documented as such, because GCC assumes that such a loop is purely for delay purposes, so optimizing it away is undesirable.
  • Spectre 2008-01-24 16:07
    Carnildo:
    Spectre:
    An int is what the compiler defines it to be. The only guarantee is that int can represent numbers from -32767 to +32767.


    Only if you're using C++ or a recent version of the C standard. The original guarantee is that:

    sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)

    with additional guarantees of

    sizeof(void *) == sizeof(int)
    sizeof(unsigned x) == sizeof(signed x)

    and that a char is large enough to store a single text character in the platform's preferred text encoding. On a machine using 7-bit ASCII, you could theoretically have an int with a range of -64 to 63.


    You're right, I looked it up in C99.
    But the point is still true, isn't it?
  • Eric M 2008-01-24 16:09
    80/20 Rule, otherwise known as the Pareto Principle: http://en.wikipedia.org/wiki/Pareto_principle

    Interesting take on job security. Of course, nowadays I don't think you could find a programming job in the US where you're not on a team that is completely understaffed for the project or projects it has.
  • Rodyland 2008-01-24 16:10
    The real WTF is.... that there is no WTF, this guy is a genius!
  • akatherder 2008-01-24 16:28
    Zylon:

    The example that "Wayne" gives appears to be a mangling of Westheimer's Rule:
    To estimate the time it takes to do a task: estimate the time you think it should take, multiply by 2, and change the unit of measure to the next highest unit. Thus we allocate 2 days for a one hour task.


    That would be great:

    A: Hey how long does it take to count to 10 seconds?
    B: Oh, about 20 minutes, why?
  • Edward Royce 2008-01-24 16:31
    Hmmmm.

    This is why I always include something very obviously wrong in the UI of any application I'm trying to get accepted by management. For some odd reason management types simply cannot accept a program as-is. Instead they all absolutely must make some sort of change to the program or else the program simply isn't "theirs".

    So I include obvious UI defects that they can easily see and order changed. Otherwise they'll really muck up a program and make life a lot harder.
  • Jon 2008-01-24 16:33
    gcc4ef3r:
    uh, wouldn't the compiler optimize that away? i mean, depending on the intelligence of the compiler...
    GCC, for example, deliberately doesn't optimise away empty loops. The only reason someone would write an empty loop is to introduce a delay.
  • SuperousOxide 2008-01-24 16:35
    Edward Royce:
    Hmmmm.

    This is why I always include something very obviously wrong in the UI of any application I'm trying to get accepted by management. For some odd reason management types simply cannot accept a program as-is. Instead they all absolutely must make some sort of change to the program or else the program simply isn't "theirs".


    Unless, of course, the "app" is just a cheap mock-up. Then management is immediately convinced the product is perfect-as-is and ready to ship, regardless of how buggy and unstable you try to convince them it is.
  • dextron 2008-01-24 16:41
    The Real WTF:

    Why was he in such a giant hurry to hire this fellow, when things were so...calm.

    Captcha: oppeto, Pinocchio's little sapling?
  • akatherder 2008-01-24 16:42
    Jason:

    When performing an AJAX operation that saves data, for example, users tends to disbelieve that it could have possibly saved in such a short amount of time. The calls tend to be almost instant. Thus, they'll click several times just "to make sure."


    1. Onclick you disable the button so they can't hammer on it.
    2. Show an animation if you wish.
    3. Show a "YOUR REQUEST SUCCEEDED, CONGRATULATIONS!!!" message.
    4. Re-enable the button.
  • cconroy 2008-01-24 16:51
    Office Space:
    Isn't Initech the name of the company in the movie Office Space?


    No.
  • phaedrus 2008-01-24 17:05
    Zylon:
    swei:
    The 80/20 rule is a hypothesis (or idiom, not sure what to call it) that 20% of your software bug will take up 80% of your time. Or something like that.

    I swear to god, I must be posting in INVISIBLE MODE or something.


    Now calm down, friend. It's not you. Everyone else seems to be posting in TLDR mode (default on /.). I've read about three or four different comments ten times over when reading the posts here.
  • Robert Watkins 2008-01-24 17:06
    I'm not a DOS guy, but wouldn't it default to a signed short - ie MAX_INT will be 0x7FFF (2^15 - 1)?
  • Robert 2008-01-24 17:14
    Benjamin Normoyle:
    I thought 80/20 was how to completely bullshit situations: when you need a percentage, use either 80 or 20 percent as your number.


    No, the 80/20 rule is that 80% of your resources will be consumed by 20% of the project. Or, 80% of your sales will come from 20% of your products. Etc.

  • excession 2008-01-24 17:21
    80/20 rule.

    The first 80% of the project takes 80% of the time.
    The Last 20% of the project takes the other 80% of the time.

    Unless it's a TCP/IP port allocation scheme involving the universal firewall hole, and the HMP port.

    Ex
  • AcidBlues 2008-01-24 17:23
    This is NOT a WTF. This is actually one of the most beautiful, creative and intelligent ways for techies to manage higher levels.

    This should be put on a poster on all the office walls. It would remind managers that they are essentially inept at understanding developers and that the most important thing they need to build with their team is trust.
  • vt_mruhlin 2008-01-24 17:28
    notromda:
    Tann San:
    I had a uni lecturer explain how we should do things like this. He said it was great as a contractor as he'd get calls back from past clients asking if he could update their system to run faster; he'd do the same thing as the OP said: shorten some unnecessary, purposely placed loop, say it took 6+ hours, and voila, instant payday.


    Wow. See, this is why computer science needs a professional "engineering" association, which includes ethics in the prerequisites. It exists in other fields, why not in computer programming?


    Virginia Tech had a required "professionalism in computing" class that was supposed to cover that... but it was more focused on etiquette lessons like "'Business casual' dress code does not include sweatpants".
  • vt_mruhlin 2008-01-24 17:37
    Joon:
    Tyler:
    Wayne is a goddamn genius.

    I do something similar at by giving out really long timelines for development and then consistently beating them by huge margins. This also gives me the benefit of extra time if something needs more work than I planned.


    I hate to break it to you, but any competent project manager would have realized long ago that you over-estimate ridiculously, and would take that into account when planning entire projects


    It's a circular argument. Everybody else on the team overestimates, so they assume you do too. Say it'll take 2 weeks? They think it'll take 5 minutes, and will therefore allocate 3 minutes to the project plan.

    The worst is when they have a fixed deadline without enough time allocated to get it done, then they ask you for estimates. "Well, I'll either have it done by Thursday or I won't be working here anymore, so I'd say Thursday is my estimate. Now you wanna stop wasting my time so I can get back to work?"
  • Peter 2008-01-24 17:46
    80/20 is the pareto rule.

    http://en.wikipedia.org/wiki/Pareto_principle
  • Spectre 2008-01-24 17:49
    Jon:
    GCC, for example, deliberately doesn't optimise away empty loops. The only reason someone would write an empty loop is to introduce a delay.


    Not quite. For example:


    for (i = 0; i < gazillion; ++i)
    {
    #ifdef WEIRD_MACRO
    /* Do weird stuff */
    #endif
    #ifdef EVEN_WEIRDER_MACRO
    /* Do even weirder stuff */
    #endif
    }



    If I don't define any weird macros then the loop is useless.

    If you want a busy wait, just use the volatile modifier.
  • Toby 2008-01-24 17:50
    I was always told that 80/20 rule means that in any given project the first 80% of the work only takes 20% of the time. It takes a lot longer to do that last 20% of the work that gets the project working at 100%
  • AdT 2008-01-24 17:52
    SomeCoder:
    The point was not that the loop was infinite; it was that the loop did nothing. Consider:


    someThreadVariable = 400;

    while (someThreadVariable != 500);

    DeleteAllFilesInUsersHomeDir();


    If we aren't careful with threading, someThreadVariable could be changed by another thread. So the current thread should "pause" until the other thread changes the variable to 500.

    If you run this in debug mode, it will work as you would expect. If you turn on optimizations, Visual Studio will see this as a loop that does nothing and remove it and the "pause" that we would expect is gone.


    I disagree. This loop does not "do nothing" - it prevents the execution of the code following it until some condition is met. In this case, the condition is !(someThreadVariable != 500), or someThreadVariable == 500 for short. If Visual C++ removes the loop in Release configuration, this breaks the semantics.

    Others have - incorrectly - claimed that the compiler may do this because someThreadVariable is not volatile. The lack of volatile allows the compiler to make assumptions about the state of someThreadVariable in the absence of threads, interrupts or other sorts of state change not effected by local code flow (in one word: asynchronous change). But in the case of your example, this only means that the compiler may replace the loop condition by "true", turning the loop into an infinite loop as without asynchronous changes of someThreadVariable, the condition will always be true.

    Changing the loop condition to "false", which is equivalent to what Visual C++ does, is, however, invalid with or without a volatile qualifier on someThreadVariable.
  • _js_ 2008-01-24 17:56
    phaedrus:
    swei:
    The 80/20 rule is a hypothesis (or idiom, not sure what to call it) that 20% of your software bug will take up 80% of your time. Or something like that.


    Now calm down, friend. It's not you. Everyone else seems to be posting in TLDR mode (default on /.). I've read about three or four different comments ten times over when reading the posts here.
    Why tell him to calm down? He didn't seem particularly upset...
  • Vilfredo Pareto 2008-01-24 18:10
    No bullshit, it's how everything in the world is distributed: from water in drops (80% of the water is in 20% of the drops), to income distribution, to customers (20% of your clients probably generate 80% of your profit), etc.

    http://en.wikipedia.org/wiki/Pareto_principle


  • schnitzi 2008-01-24 18:21
    I would be very surprised if there was a compiler in the world that does that optimization. Lots of people use loops like that for delays. It would be a big presumption on the part of the compiler to disallow it.

    Not only that, the loop changes the value of i.
  • phaedrus 2008-01-24 18:44
    _js_:
    phaedrus:
    swei:
    The 80/20 rule is a hypothesis (or idiom, not sure what to call it) that 20% of your software bug will take up 80% of your time. Or something like that.


    Now calm down, friend. It's not you. Everyone else seems to be posting in TLDR mode (default on /.). I've read about three or four different comments ten times over when reading the posts here.
    Why tell him to calm down? He didn't seem particularly upset...


    Damn. I concede.
  • Kneo 2008-01-24 18:49
    The 80/20 rule applies like this:
    80% of the profit in my store comes from 20% of the items sold.
    80% of the work done with my app exercises 20% of the code.
    80% of the population owns 20% of the wealth.
    In most companies 80% of the work is done by 20% of the staff...



  • [ICR] 2008-01-24 19:26
    afaik Java is one of the only languages to do the sort of loop optimisation where you remove "ineffective" loops, and even then it doesn't do it anymore.

    And the .NET comparison isn't really correct, because a lot of optimisation goes on in the JIT compiler (as Java's did in this case). All you're doing is comparing the CIL which is only half optimised.
  • chrismcb 2008-01-24 19:48
    Alcari:
    Oh, i've done that.
    Someone complained that sometimes saving a file was too fast, so he didn't know if it was successful, since the status bar popped in and out too fast.

    Solution:
    If filesize < 2mb
    open FakeSaveAnimation.gif
    end


    There is an ENORMOUS difference between slowing things down for the sake of making you and your department look better, and slowing things down to let the user know something happened.

  • tikva 2008-01-24 22:00
    Carnildo:

    with additional guarantees of

    sizeof(void *) == sizeof(int)


    You must be kidding...
  • Zygo 2008-01-24 22:52
    Chmeee:
    GCC will not optimize away the empty loop. It is documented as such, because GCC assumes that such a loop is purely for delay purposes, so optimizing it away is undesirable.


    GCC 4.1 (and possibly earlier versions) does optimize away loops which can be proven to have no side effects visible to standard-compliant programs from outside of the loop. This can remove significantly non-empty loops if the compiler can prove the optimization has no semantic effects visible from within the program but outside the loop (obviously the optimizer doesn't take into account execution time, heat and EMI from the CPU, etc so any such effects will be lost).

    I found this out the hard way when trying to compare the execution time of various algorithms for calculating a simple idempotent function by calling the function millions of times in a loop. Unless I dereferenced a pointer within the function, the optimizer would determine that it was necessary to call the function at most once, ruining my test.
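
    One way around that, under the same assumptions (benchmarking a pure function in a loop), is to write each result through a volatile sink, so the calls cannot be proven dead. A sketch with a made-up function:

    ```c
    #include <assert.h>

    /* Hypothetical idempotent function being benchmarked. */
    static int square(int x) { return x * x; }

    /* Every store to a volatile object is an observable side effect,
     * so the optimizer must keep all the calls that feed it instead of
     * collapsing the loop to a single call. */
    static volatile int sink;

    void benchmark_square(long iterations)
    {
        long i;
        for (i = 0; i < iterations; i++)
            sink = square(42);
    }
    ```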
  • Zygo 2008-01-24 22:55
    Zygo:
    GCC 4.1 (and possibly earlier versions) does optimize away loops...


    ...if used with -O2 (maybe -O1 as well). Without a -O flag, or with -O0, GCC won't do any optimizations at all, not even trivial ones.
  • Doug Jones 2008-01-24 23:53
    There's something fiendishly brilliant about it... >:)
  • eric76 2008-01-25 00:20
    Tyler:
    Wayne is a goddamn genius.

    I do something similar at by giving out really long timelines for development and then consistently beating them by huge margins. This also gives me the benefit of extra time if something needs more work than I planned.
    Years ago, working for an engineering company, my boss went into detail about something he wanted done and how he wanted me to do it. He then asked me how long it would take. I thought about it and said that it would take about two weeks.

    That evening after he left, I thought of another approach to solving the problem that would accomplish the same thing but was substantially simpler to program. So I went ahead and did it. It took about an hour or two to finish using this approach.

    The next day, my boss came up to me and told me to go ahead and do it. He was a bit ticked off when I told him I finished it the night before and explained the way I did it.

    I think he waited two weeks to show the results to his boss.
  • Kain0_0 2008-01-25 00:28
    chrismcb:
    Alcari:
    Oh, i've done that.
    Someone complained that sometimes saving a file was too fast, so he didn't know if it was successful, since the status bar popped in and out too fast.

    Solution:
    If filesize < 2mb
    open FakeSaveAnimation.gif
    end


    There is an ENORMOUS difference between slowing things down for the sake of making you and your department look better, and slowing things down to let the user know something happened.



    I agree with you; to be truly ethical, such loops (without another purpose like synchronisation, interfacing with a human, etc.) shouldn't be used, as a general rule of thumb.

    On the other hand, if your boss is milestone-minded and only pays attention to, say, runtime (or other very small-minded and limited statistics), techniques like the above can be useful to show that progress is being made. To be ethical in this case you actually need to have made progress with program faults/refactoring/scaffolding.

    It would also be useful in punishing your bosses for a really crappy design decision: you can simply slow the program down.

    boss: Why is the system so slow?
    tech: Well, you know those changes to the xyz runtime...
    boss: yes?
    tech: They don't fit there, we need to move them to abc (the right place)
    boss: Will that speed it all up again?
    tech: Pretty sure.

    (Note: you really only should punish them if you are very certain it won't work/work well. You shouldn't use it when you're simply not getting your way, or doing something you're uncertain of/don't want to do.)

    Sometimes you need to get people to respect your knowledge and skill, and sometimes you can only do it by hitting them - even though we don't really want to. It's the difference between being paid to make a crappy house, and being paid to make the house they really will like and need.
  • Amanda 2008-01-25 00:39
    Well, I don't know how common this is, but I'm majoring in Computer Science and at my university we're in the College of Engineering. Second semester senior year, we're required to take an Ethics course.
  • chrome 2008-01-25 00:47
    I've read the discussion on whether or not an empty loop would be optimised away by a modern compiler, but surely a loop such as

    for (i=0; i<10; i++) {;}

    would be optimised to NOP x 10 by most compilers set to 'unroll loops'? And would they then remove redundant NOPs?

    (10 minutes passes)

    Well, after some testing, it seems that gcc won't optimise out loops like this, but Sun Workshop cc (also known as Sun Pro cc) WILL optimise out loops like this with -fast.

    I guess the thing is, if you're inserting bogus delay code to make yourself look better a few months down the line when you 'optimise' the code, use usleep(). :)
  • lgh 2008-01-25 01:12
    um, the story takes place in 1990. November 1989, plus ~(3) months, makes it ~February 1990. :-)
  • schmilblick 2008-01-25 01:59
    Speed-up loop. It's.... it's... insurance, and pure genius!
  • MS 2008-01-25 03:26
    Crash Magnet:
    I thought the 80/20 rule was: The first 80% of the project take 20% of your time, the last 20% of the project takes 80% of the time. But I remember it as the 90/10 rule.
    I think you mixed them up. The 80/20 says that 80% most useful functionality takes only 20% of the total implementation effort, so you better convince your customer to drop the rest. The 90/10 says that implementing 90% of the system takes 90% of the planned implementation time, but the remaining 10% takes another 90% planned time :-P
  • iMalc 2008-01-25 04:07
    Assuming that i is an int (-32768 .. 32767), then thanks to integer overflow, when they drop the first zero off that they'd blow all their future speedup ability in one hit. 100000 is 0x186A0 in hex, which will get stored as 0x86A0, which as a signed int just happens to be negative since the high-bit is set.
    So, since zero is not less than a negative number, the loop will terminate immediately.

    Then they'd have to do some real optimisation work next time!
  • Kiss me I'm Polish 2008-01-25 04:29
    Someone wrote "xor eax, eax" instead of "mov eax, 0".
    Damn, I love it. You save a cycle!
  • Marcus 2008-01-25 04:35
    I love the idea of Windows 2.11, having worked at Philips on an add-on toolset for Windows 2 that was released a month before Windows 3 made it obsolete by implementing all the features we had carefully crafted to get round the limitations that were no longer there.


  • LogicDaemon 2008-01-25 04:42
    Rene:

    I feel dumb. I can't find the integer overflow. :(


    In more or less every DOS compiler you'll find, int defaults to short, aka a 16-bit integer. 0x000F423F > 0xFFFF

    and it is signed int by default, i.e. INT_MAX = 0x7FFF
  • Andy 2008-01-25 04:55
    mallard:

    The C# compiler in VS2008 certainly doesn't. Built in release mode with optimisation:
    Source:

    static void Main(string[] args)
    {
    for (int i = 0; i < 100000; i++) ;
    Console.WriteLine("Hello World!");
    }


    Disassembly (from Lutz Roeder's .NET Reflector):

    private static void Main(string[] args)
    {
    for (int i = 0; i < 0x186a0; i++)
    {
    }
    Console.WriteLine("Hello World!");
    }

    I thought it was the JIT compiler, which compiles CIL (a.k.a. MSIL) into machine code, that does the optimization in .NET.
  • Arancaytar 2008-01-25 04:56
    ... at least he is honest.
  • GettinSadda 2008-01-25 05:07
    Zygo:
    Zygo:
    GCC 4.1 (and possibly earlier versions) does optimize away loops...


    ...if used with -O2 (maybe -O1 as well). Without a -O flag, or with -O0, GCC won't do any optimizations at all, not even trivial ones.

    So you're saying that if you turn off all optimization your compiler doesn't optimize?

    I'm shocked!
  • GettinSadda 2008-01-25 06:24
    Turbo C++ 3.0, a 1992 compiler, does not optimize the empty loop!

       ;	
    
    ; int main(int argc, char **argv)
    ;
    assume cs:_TEXT
    _main proc near
    push bp
    mov bp,sp
    push si
    ;
    ; {
    ; int i;
    ;
    ; printf("Hello\n");
    ;
    mov ax,offset DGROUP:s@
    push ax
    call near ptr _printf
    add sp,2
    ;
    ;
    ; for(i = 0; i<10000; i++) {;}
    ;
    xor si,si
    jmp short @1@86
    @1@58:
    inc si
    @1@86:
    cmp si,10000
    jl short @1@58
    ;
    ;
    ; return 0;
    ;
    xor ax,ax
    ;
    ; }
    ;
    pop si
    pop bp
    ret
    _main endp
  • John 2008-01-25 06:29
    You didn't work on Vista, did you?
  • [ICR] 2008-01-25 06:39
    "for (i=0; i<10; i++) {;}

    would be optimised to NOP x 10 by most compilers set to 'unroll loops'? And would they then remove redundant NOPs? "

    Don't forget to set i to 9 as well.
  • GettinSadda 2008-01-25 06:52
    [ICR]:
    "for (i=0; i<10; i++) {;}

    would be optimised to NOP x 10 by most compilers set to 'unroll loops'? And would they then remove redundant NOPs? "

    Don't forget to set i to 9 as well.

    I'd rather set it to 10, but if i is not used outside the loop then I would not want it to even exist!
  • JimM 2008-01-25 07:11
    AdT:
    Ben:
    Probably because that loop actually does something?


    It's amazing how many people don't understand the C# syntax although in the case of this loop, it's absolutely identical to that of C or C++.


    And also to Java, javascript, perl, php... anyone else want to add some more? ;^)
  • Bob Holness 2008-01-25 07:47
    You remember "The first 80% of the project take 20% of your time, the last 20% of the project takes 80% of the time." as the 90/10 rule!?
  • AdT 2008-01-25 08:00
    Kiss me I'm Polish:
    Someone wrote "xor eax, eax" instead of "mov eax, 0".
    Damn, I love it. You save a cycle!


    You don't save a cycle but some code size. Compilers (and assembler programmers) have used this idiom for a long time to avoid the need for an immediate 0 in the code segment. Basically, when you write

    mov eax, N

    where N is a constant, the assembler will generate the opcode for "mov eax, (immediate)" followed by N as a 32-bit value. If you even use rax, you get a 64-bit immediate. xor eax, eax however does not require an immediate so results in a much smaller instruction.

    Ironically, using xor eax,eax instead of mov eax,0 could actually slow down execution because the false read dependency can stall out-of-order execution as it is used in modern x86 processors. Because the idiom is so widely used, however, processors treat instructions like this as a simple write, thus avoiding this penalty.

    By the way, the nop mnemonic in x86 assembler is just an alias for xchg eax,eax (assuming 32-bit operand mode). It, too, is special-cased in the processor to avoid false read and in this case also write dependencies.
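The size difference AdT describes is visible in the raw encodings (32-bit mode; byte values per the standard x86 opcode map — shown as an annotated listing, not runnable code):

```asm
; Encoded sizes are the point here:
b8 00 00 00 00      ; mov eax, 0    -- opcode + 32-bit immediate, 5 bytes
31 c0               ; xor eax, eax  -- 2 bytes, no immediate needed
90                  ; nop           -- one-byte alias of xchg eax, eax
```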
  • Guido 2008-01-25 08:33
    So ... in those days they didn't have optimizing compilers?

    Mind you, I was still dabbling in GWBasic back then...
  • 543 2008-01-25 08:49
    mallard:
    The C# compiler in VS2008 certainly doesn't. Built in release mode with optimisation:



    But C/C++ compile into machine code, so everything that is in the compiled code will be executed. Isn't C# code optimized on-the-fly like Java in the JVM?
  • PAG 2008-01-25 08:55
    Surely it compiles into machine code but it can't always be perfectly optimized for every processor out there.
  • [ICR] 2008-01-25 09:38
    "I'd rather set it to 10, but if i is not used outside the loop then I would not want it to even exist!"

    Sorry, you're right about it being 10.
    But from the code fragment you can't tell that it's not used outside the loop, but you can tell it is in scope outside the loop (since it's not "for (int i = 0...")
  • Tottinge 2008-01-25 09:46
    Actually, the two I know are:

    1) Typically 80% of your problems are in 20% of your code/process/etc.
    2) 20% of the people do 80% of the work

  • Mark 2008-01-25 09:56
    Hilarious.
  • Gabelstaplerfahrer 2008-01-25 09:57
    This reminds me of the story on TDWTF about a guy who wrote some scanner software and found a way to speed it up. It was so fast the customer didn't believe the software actually worked because he didn't have to wait, and the programmer had to slow it down. Later he had to speed the software up again.
  • Cloak 2008-01-25 10:10
    Ben:
    mallard:
    Mick:
    gcc4ef3r:
    uh, wouldn't the compiler optimize that away? i mean, depending on the intelligence of the compiler...


    I read statements like this again and again, but I'll be damned if I've ever seen a compiler do that. Ok, ok, I've only used the compiler that comes with (various versions of) Visual C++ but it is supposed to optimize some and even with full optimizations on in a release build it's never optimized anything like this away that I've found.



    The C# compiler in VS2008 certainly doesn't. Built in release mode with optimisation:
    Source:

    static void Main(string[] args)
    {
    for (int i = 0; i < 100000; i++) ;
    Console.WriteLine("Hello World!");
    }


    Disassembly (from Lutz Roeder's .NET Reflector):

    private static void Main(string[] args)
    {
    for (int i = 0; i < 0x186a0; i++)
    {
    }
    Console.WriteLine("Hello World!");
    }


    Probably because that loop actually does something?


    well it increments the counter, doesn't it?
  • Cloak 2008-01-25 10:13
    clively:
    Ben:
    That is a WTF? No

    That's a GREAT IDEA


    I really hope none of you guys work for Microsoft.


    Under Windoze, even counting the counters does take soooo much time, but when the counters are actually counting aie, aie, aie
  • Cloak 2008-01-25 10:15
    AdT:
    Matthew:
    Hell, in a system like DOS, you may not even make it to the point of overflowing i before the system hangs because you wrote over some video buffer or something.


    What? I've done some video programming under DOS and if there was a memory area with fixed address that you wanted to overwrite with zeroes, it was the video RAM. And no, the system would not hang if you did this.

    The system would rather hang (or probably crash in a more spectacular way) if you overwrote some hardware interrupt handler or some code/data structures used by such a handler. You could prevent that by executing CLI before the loop. Of course that would only make any sense if you wanted to kick DOS from memory altogether, for example to load your own OS instead.


    but when you start writing into F000 and above the situation changes
  • Cloak 2008-01-25 10:17
    Intelligent Layman:
    > Don't forget it's 1987.

    It's not pretend-to-be-a-time-traveler day. You must really be a time traveller.


    A real-time traveller
  • Henry Miller 2008-01-25 10:43
    [ICR]:
    "I'd rather set it to 10, but if i is not used outside the loop then I would not want it to even exist!"

    Sorry, you're right about it being 10.
    But from the code fragment you can't tell that it's not used outside the loop, but you can tell it is in scope outside the loop (since it's not "for (int i = 0...")


    Unless you know how the compiler puts things on the stack. In which case

    int *j = (int *)(&j + 1);
    for(int i = 0; i < 10; ++i);
    // i not in scope
    foo(*j); // j is a pointer to i.



    The above is of course undefined behavior if not outright illegal. I'd have to study the standard to be sure which. Come to think of it, I think this is one area where C and C++ are different. Your compiler is likely to pass it, though, if only because you told it to shut up.
  • Talliesin 2008-01-25 12:05
    Matthew:
    My point is that the OBVIOUS error in that code sample was picking a seemingly arbitrary point in memory to write zeros. The integer overflow was actually rather subtle.
    It could be a known memory address in a fixed-memory system. It could be a special memory address that actually writes to a peripheral of some sort. There are quite a few reasons why this might be done in certain low-level cases.

    It's certainly suspicious code, but it's something to question, not necessarily something to declare a bug.
  • Jimmy the K 2008-01-25 12:12
    Doesn't i cycle between -32768 and 32767? I.e. always less than 1000000. Infinite loops really slow things down.
  • Mark 2008-01-25 12:26
    The real WTF is that Al Gore hadn't invented the interwebs yet.

    Otherwise, hilarious story. Loved it.
  • Zylon 2008-01-25 12:32
    MS:
    I think you mixed them up. The 80/20 says that 80% most useful functionality takes only 20% of the total implementation effort, so you better convince your customer to drop the rest. The 90/10 says that implementing 90% of the system takes 90% of the planned implementation time, but the remaining 10% takes another 90% planned time :-P

    As already mentioned several times, this is the Ninety-Ninety Rule.
  • Mark 2008-01-25 12:35
    Did Wayne also invent the Bozo Sort?
  • bandit 2008-01-25 12:46
    I always thought it meant 80% of the complexity was in 20% of the code. There's probably 1000000 definitions though...
  • Niki 2008-01-25 14:13
    Benjamin Normoyle:
    I thought 80/20 was how to completely bullshit situations: when you need a percentage, use either 80 or 20 percent as your number. E.g. : "80 % of this department is slacking off," "20% of companies give free soda to their employees," "20% of experts say YO MAMA."
    I always heard that it was 80% of the work was done by 20% of the people working. Actually I usually heard it as the 90/10 rule, but maybe at the time I was in a WTFesque job environment.
  • Reed 2008-01-25 14:16
    Think 16 bit man.
  • chrismcb 2008-01-25 14:27
    Kain0_0:

    I agree with you, to be truly ethical such loops (without another purpose like synchronisation, interfacing with a human, etc) shouldn't be used as a general rule of thumb.

    On the other hand, if your boss is milestone minded and only pays attention to say runtime (or other very small minded and limited statistics) techniques like the above can be useful to show that progress is being made. To be ethical in this case you actually need to have made progress with program faults/refactoring/scaffolding.


    There is still nothing ethical about it, whether you are doing actual work or not. There are ways of measuring productivity other than "the program runs faster"

    And on top of that you are writing a Busy Wait loop. Don't do that. Play nice with others and use a Sleep or its equivalent.

  • Arlie 2008-01-25 14:56
    The really hilarious thing is that a lot of modern C compilers will remove the entire for loop, if they can determine that what's inside the loop has no side effects. And {;} certainly has no side effects.
  • The General 2008-01-25 14:58
    Edward Royce:
    Hmmmm.

    This is why I always include something very obviously wrong in the UI of any application I'm trying to get accepted by management. For some odd reason management types simply cannot accept a program as-is. Instead they all absolutely must make some sort of change to the program or else the program simply isn't "theirs".

    So I include obvious UI defects that they can easily see and order changed. Otherwise they'll really muck up a program and make life a lot harder.

    Interesting. Scott Adams' "The Joy of Work" suggests something similar:

    "Before making any proposal to your boss, insert some decoy steps. The decoys are elements of your plan that you don't actually intend to do. Make sure the decoys are the most obvious parts of your plan." ... "Your boss will notice that the third bullet 'doesn't fit'. He'll demand that you get rid of that step. Put up some resistance (just for show) and then reluctantly agree."

    The example he gives is:

    • Research the market for new toys
    • Design toy
    • Assassinate the president of Chile
    • Produce toy

  • widget 2008-01-25 17:12
    for(i=0;i<1000000;i++) {;}

    And the best part is any optimizing compiler worth its salt will remove the effects of this "speed-up loop"
  • AdT 2008-01-25 18:29
    543:
    But C/C++ compile into machine code, everything what is in compiled code will be executed. Isn't C# code optimized on-the-fly like Java in JVM?


    .NET byte code is optimized on-the-fly. This includes C# but also VB.NET, C++/CLI and numerous other languages.

    Yes, I love threads where I can show off my super-smartass powers. Just keep that kryptonite away from me.
  • [ICR] 2008-01-25 19:59
    "VB.NET, C++/CLI"
    That makes it sound like C++ and CLI are the same :S
  • chrome 2008-01-25 22:19
    widget:
    for(i=0;i<1000000;i++) {;}

    And the best part is any optimizing compiler worth its salt will remove the effects of this "speed-up loop"

    read back a bit; the conclusion is that it's dependent on the flags and compiler used. GCC doesn't, under any flag.
  • Mickeyd 2008-01-25 22:36
    Gee, and all these years I thought that you figured out the time it would take, divided by two, multiplied the result by 4, and added 2. E.g. task time = ((ET/2)*4)+2, where "ET" equals the actual time required.
  • Dave 2008-01-26 03:42
    AdT:

    Changing the loop condition to "false", which is equivalent to what Visual C++ does, is, however, invalid with or without a volatile qualifier on someThreadVariable.

    Please cite the relevant section of the C standard that supports your claim. Both the MSVC and gcc teams disagree with you. Here's a free copy of the C99 draft: http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1124.pdf
  • bg 2008-01-26 14:14
    Jay:
    Also, it really should check the system clock rather than just counting. That way you'd get consistent performance savings regardless of the speed of the processor.
    But that way you would have to remove zeroes instead of just telling the customers to buy faster hardware ;-)
  • lmb 2008-01-26 14:20
    This is not WTF. This is BOFH.
  • anon 2008-01-26 19:14
    widget:
    for(i=0;i<1000000;i++) {;}

    And the best part is any optimizing compiler worth its salt will remove the effects of this "speed-up loop"


    It better bloody not.
    If I go to the effort of adding a loop that increments an integer a million times, I bloody well want that integer incremented a million times.

    Also, in 1990, most compilers were pretty crappy.
    In fact, in 2008 most compilers are still pretty crappy.
  • real_aardvark 2008-01-26 19:47
    AdT:
    Kiss me I'm Polish:
    Someone wrote "xor eax, eax" instead of "mov eax, 0".
    Damn, I love it. You save a cycle!


    You don't save a cycle but some code size. Compilers (and assembler programmers) have used this idiom for a long time to avoid the need for an immediate 0 in the code segment. Basically, when you write

    mov eax, N

    where N is a constant, the assembler will generate the opcode for "mov eax, (immediate)" followed by N as a 32-bit value. If you even use rax, you get a 64-bit immediate. xor eax, eax however does not require an immediate so results in a much smaller instruction.

    Ironically, using xor eax,eax instead of mov eax,0 could actually slow down execution because the false read dependency can stall out-of-order execution as it is used in modern x86 processors. Because the idiom is so widely used, however, processors treat instructions like this as a simple write, thus avoiding this penalty.

    By the way, the nop mnemonic in x86 assembler is just an alias for xchg eax,eax (assuming 32-bit operand mode). It, too, is special-cased in the processor to avoid false read and in this case also write dependencies.

    As I recall, and it's been twenty years now, the real point of this idiom was to clear various flags as a side effect. That's the way I remember it, anyway (either on Z80 or i8086).

    And yes, I could look it up, but it's way past my bed-time.
  • Chris M. 2008-01-27 14:27
    No, it's the rule that says your program spends 80% of its time in 20% of its code (so that's the code you should optimize). Or, alternatively, 20% of your code will take 80% of your development man-hours (not necessarily the same 20% that accounts for 80% of your run time).
  • ko1 2008-01-27 17:09
    Three months after November 1989 is January 1990.
  • Rhialto 2008-01-28 05:45
    foo:
    Mike:
    Err, maybe I'm missing something--but given the 16-bit overflow, wouldn't the "speed-up loop" be infinite? Or is that the real WTF?


    On a 16-bit machine the loop should run 16960 times.


    Given a loop like

    for(i=0;i<1000000;i++) {;}

    with a 16-bit compiler, the constant given would not fit in an int and therefore automatically be a long int. That is the rule given in K&R second edition (1988).
    The comparison would then be done by comparing an extension to long of i with 1000000.
    Then, variable i would overflow before reaching 1000000, and the loop would be infinite.
  • Rhialto 2008-01-28 05:51
    Carnildo:
    with additional guarantees of

    sizeof(void *) == sizeof(int)

    I don't think there were ever C compilers that guaranteed that.
    And even if there were, they would be so old that they didn't know about "void" yet.
  • fred 2008-01-28 06:24
    It is fake: in 1987, a 1-to-1-million loop would take a very, very noticeable amount of time (i.e. on the order of seconds). Having this for each screen refresh? I call b*s!t.

    Anyway, empty loops in screen-updating code on early PCs were common because you got "snow" effects if you updated the screen too frequently...
  • Marc K. 2008-01-28 22:59
    That explains SO much. msiexec.exe must be full of these "speedup loops". That's the only thing that can possibly explain why MSI installs are so excruciatingly slow that you want to gouge your eyes out with a sharp object at the thought of running an MSI install.
  • Evil Otto 2008-01-29 09:51
    Then the user would complain about having to click OK every time they saved something.

    Sometimes the clients need to be protected from themselves.
  • EA 2008-01-29 21:17
    Actually, the rule is that 80% of the work takes 20% of the time, and the other 20% of the work takes 80% of the time.
  • david 2008-01-31 03:26
    fred:
    It is fake: in 1987, a 1 to 1 million loop would take a very very noticeable amount of time (ie: in the seconds magintude order). Having this for each screen refresh ? I call b*s!t.

    Anyway, empty loops in screen updating code on early PC were common because you had "snow" effects if you update the screen too frequently...


    WTF? Random delays don't solve that problem unless you are lucky. You need to synchronize with the screen update, which you do by hooking an interrupt or looping until the video register clears - not with an empty loop
  • david 2008-01-31 03:46
    Teh Optermizar:
    JonC:


    It tells you that it's 1024 words on the front page


    Aha... I came in via a direct link in the RSS feed, so I didn't see that... it's only 892 according to Word (904 if you include the title and by-line gubbins)

    Obviously this is an indication that Microsoft is short-changing me on words... bastards... :P


    Last I knew, editor word count was designed for reporters and other people who actually used it - and it was a letter count, because newspapers and magazine specified, laid out, and paid that way. So 892 'words' in most editors meant something like 3568 characters. Or has that changed?
  • david 2008-01-31 04:11
    Chmeee:
    GCC will not optimize away the empty loop. It is documented as such, because GCC assumes that such a loop is purely for delay purposes, so optimizing it away is undesirable.


    gcc can and does optimise away an empty loop, or anything after an absolute return statement. It will optimise away
    for (i = 0; i < 1000000;i++);
    for (i = 0; i < 1000000;i++){}
    for (i = 0; i < 1000000;i++){i=i;}
    However, it will not optimise away
    for (u = 0; u < -1;u++){
    process_command(wait_for_command());
    }
    Or other obvious optimisations, even on -Os, and of course suffers from the limitations of the language definition when it comes to optimisation. Part of that has been addressed by changes to the language definition (for example, strict aliasing), but it is still a far cry from languages which were designed with code optimisation in mind.


  • int19h 2008-02-02 06:20
    The loop will get optimized by the JIT compiler later on. Most .NET compilers don't optimize aggressively, because they know that all the serious optimizations (e.g. function inlining, even constant folding) will be done at run-time.

    The way to check is to run the program in VS and switch to disassembly. Only you first need to find the obscure "Suppress JIT optimizations when debugging" checkbox somewhere in the VS options dialog and turn it off; only then do you see what actually is executed when your program runs standalone.
  • Chris 2008-02-06 04:27
    It's in the for loop. A signed int is limited to 32767, but the loop goes way further. I feel dumb because I can't remember how to work out how that pointer would look after the loop.
  • Majax 2008-02-18 14:50
    in the for loop he increments an int from 0 to 1000000

    int i;
    for (i = 0; i < 1000000;i++)

    problem is an int doesn't pass 65535

    check: http://en.wikipedia.org/wiki/65535_(number)
  • Majax 2008-02-18 14:56
    I think the int would be -32768 if signed or go back to 0 if unsigned.

    PS: Sorry, 65535 is the limit for unsigned int Chris is right about the 32767 limit for signed int.
  • Mkay 2008-09-17 09:04
    WTF ?!
    you don't make an empty loop that consumes resources (=CPU) while doing nothing (=waiting for some threading variable)!

    go and see some thread synchronization methods please and don't spread this nonsense...
  • Mkay 2008-09-17 09:07
    sorry forgot the quote the guy with the

    someThreadVariable = 400;
    while (someThreadVariable != 500);
    DeleteAllFilesInUsersHomeDir();


    ^^^^ idea
  • robardin 2009-02-11 20:51
    OK, so they're 16-bit integers and (presumably) pointer addresses as well. But without a malloc() or init to a static buffer, won't the line

    char *p = 0x10000;

    simply result in a pointer to who-knows-where, and the deref in the loop:

    *p++ = 0;

    an immediate crash? (Or is this some ancient DOS magic memory address for some kind of device or O/S hook?)
  • Daniel Smedegaard Buus 2009-04-06 17:44
    TRRWTF here is that Windows 2.11 was on the market.
  • Toni N 2009-05-28 09:40
    This story is such a classic gem, I keep coming back to read it again and again. The word "speed-up loop" has become a standard term where I work :)
  • Jimmy the Geek 2009-08-20 13:22
    80 20 rule is also known as the Pareto principle.

    In software, the 80/20 rule is the ratio between program completion and time. You can get 80% of the work done in 20% of the time. The last 20% of work takes 80% of the time.
  • Kuba 2009-09-21 14:32
    SomeCoder:
    Mick:
    gcc4ef3r:
    uh, wouldn't the compiler optimize that away? i mean, depending on the intelligence of the compiler...
    I read statements like this again and again, but I'll be damned if I've ever seen a compiler do that. Ok, ok, I've only used the compiler that comes with (various versions of) Visual C++ but it is supposed to optimize some and even with full optimizations on in a release build it's never optimized anything like this away that I've found.
    It will optimize it away. If you just set to Release mode in Visual Studio, that isn't full optimizations. You need to tell it that you want it to run as fast as possible too.

    One time, I tried something like this:

    while (true);

    Set the mode to Release and the optimizations to "Speed" and voila, program runs, no infinite loop.
    If any version of VS really did what you wrote, it's a bug, and should be reported yada yada. Hopefully after you re-read your post, it'll be obvious why. This code doesn't "do nothing". It halts. IIRC, C spec does not allow optimizing halts away.

    Cheers, Kuba
  • Mark 2009-10-09 21:06
    VolodyA! V A:
    The funny for me was the google ad on this exact page:

    "Slow computer? Speed it up by cleaning your registry..." maybe there is some NumberOfZerosInSpeedupLoop in Windows registry?


    I am positive some large(?) packages use a loop to make you think you're getting some real meat when you install...

    Take Adobe CS2 - I mean, how much does an install really have to do? Copy some files and set the reg? Why is this taking 10-15 mins!!

    Watch it sometime. No disking, no CPU. It's a scam!



  • Mark 2009-10-09 21:40
    Kain0_0:
    chrismcb:
    Alcari:
    Oh, i've done that.
    Someone complained that sometimes saving a file was too fast, so he didn't know if it was successful, since the status bar popped in and out too fast.

    Solution:
    If filesize < 2mb
    open FakeSaveAnimation.gif
    end


    There is an ENORMOUS difference between slowing things down for the sake of making you and your department look better, and slowing things down to let the user know something happened.



    I agree with you, to be truly ethical such loops (without another purpose like synchronisation, interfacing with a human, etc) shouldn't be used as a general rule of thumb.

    On the other hand, if your boss is milestone minded and only pays attention to say runtime (or other very small minded and limited statistics) techniques like the above can be useful to show that progress is being made. To be ethical in this case you actually need to have made progress with program faults/refactoring/scaffolding.

    It would also be useful in punishing your bosses for a really crappy design decision, you can simply slow the program down.

    boss: Why is the system so slow?
    tech: Well you know those changes to the xyz runtime...
    boss: yes?
    tech: They don't fit there, we need to move them to abc (the right place)
    boss: Will that speed it all up again?
    tech: Pretty sure.

    (Note: you really only should punish them if you are very certain it won't work/work well. You shouldn't use it when you're simply not getting your way, or doing something you're uncertain of/don't want to do)

    Sometimes you need to get people to respect your knowledge and skill.
    Sometimes you can only do it by hitting them - even though we don't really want to.
    It's the difference between being paid to make a crappy house and being paid to make the house they really will like and need.



    "Sometimes you need to get people to respect your knowledge and skill. Sometimes you can only do it by hitting them - even though we don't really want to."

    Omg, lol. You lie!

  • forch 2011-11-11 12:58
    Benjamin Normoyle:
    I thought 80/20 was how to completely bullshit situations: when you need a percentage, use either 80 or 20 percent as your number. E.g. : "80 % of this department is slacking off," "20% of companies give free soda to their employees," "20% of experts say YO MAMA."


    It's actually that 80% of a project's work is done in 20% of the calculated time, and the remaining 20% takes up 80% of the time ;)
  • StJohn 2011-12-02 03:59
    I admit doing this on a web application some seven years ago.
    I just added a wait function, setTimeout(), with a random time between say 500 and 5000 ms when saving a form. This way the waiting changed every time and made it less obvious that the application was unoptimized.

    Then I just tweaked the values down a bit to improve speed.
  • Cbuttius 2012-07-31 10:24
    It isn't a speed-up loop of course, although I would use sleep() to not absorb CPU.

    It's what is known in the market as the discriminatory pricing model, and it works in a way that you extract from people what they are willing to pay.

    So sometimes you purposely provide an inferior product so people who don't want to pay you as much can still have a product that works, whilst the rich ones will pay you more for a superior one.

    Normally that applies where you have a large market and want to sell to as many people as you can, but in this case it is sort of twisted to work for the current employer. Pay us more and we'll make your app faster..

    Of course, here as they are "removing a 0" each time, it will only work until they run out of zeros to remove so they'd better have this in several places.
  • agtrier 2012-08-06 08:29
    OMG! I think I was working as support for the same application. Seriously!
  • agtrier 2012-08-06 08:31
    In my experience, the first 80% of the project take up 120% of the time, the other 20% take another 80%.
  • ObiWayneKenobi 2013-06-06 11:03
    A job like this would be a godsend in the modern day. 1 hour doing menial tasks, and the rest of the time could be spent learning the latest and greatest newfangled JavaScript frameworks or whatever. The only thing better would be a super lax IT environment where you could play games and stuff on your computer.

    Get a fat paycheck for 1 hour of work and 7 hours of doing whatever you want? Fuck yes. Call me unethical all you want, but you know that would be paradise.
  • Lol 2014-04-27 07:13
    Several years late reply, software engineer here from the future. The 80/20 rule is that: 80% of something in a business is caused by 20% of something. It is kind of a joke rule, but seems to hold true nonetheless. E.g. 80% of bugs are caused by 20% of employees, 20% of sales are caused by 80% of employees, and so on. Wikipedia it. Not sure if we had that in 2008... I think we did, but people complained about the reliability of citations.
  • Yuval 2014-09-18 07:14
    I heard that the 80/20 rule, is about gaining 80% success using 20% effort. (and don't try to understand what it means)