• (cs)

    i done worked with inline assembly all my life, yessir

    i tell you what

  • Tom (unregistered)

    It's......it's brilliant.

  • anonymous_coder() (unregistered)
    <head-desk> Sounds like some of the mechanics I knew, who would break things that were almost broken to keep from having to do hard jobs - windshield wipers, mirrors, etc.

    On the other hand, it's a great way to keep a PHB happy...

  • (cs)

    Speed Loops... BRILLIANT!

    I guess that's what happens when you need to "fill the void"

  • (cs)

    That's...pretty crummy. If they had put as much effort into creating the GUI as they did into avoiding work, we might have had a real alternative to Microsoft.

  • Tyler (unregistered)

    Wayne is a goddamn genius.

    I do something similar by giving out really long timelines for development and then consistently beating them by huge margins. This also gives me the benefit of extra time if something needs more work than I planned.

  • (cs)

    I feel dumb. I can't find the integer overflow. :(

    char *p only goes up to 0x0010423F, which is far below the maximum of 0xFFFFFFFF.

    and int i only goes up to 0x000F423F, which is also below the maximum of 0x7FFFFFFF.

    The only WTFs I see are a hardcoded memory address, which I suppose could be acceptable depending on what hardware and operating system the code runs under, and the setting of 1000000 bytes to 0 when there's probably a better way to do it (memset is one), although what improvements can be made depends on how that memory is used (if it is later read without being set, and the program depends on 0 for initial values, then there's probably no way around it).

    [Edit: Wait... it makes more sense if those were int16s... were they?]
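
    A minimal sketch of the memset route, for illustration; a heap buffer stands in for the article's hardcoded address, which a modern OS wouldn't have mapped:

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *p = malloc(1000000);    /* stand-in for the hardcoded 0x10000 */
        if (p != NULL) {
            memset(p, 0, 1000000);    /* one million zero bytes, no hand-rolled loop */
            free(p);
        }
        return 0;
    }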

  • Alex G. (unregistered)

    That is brilliant.

    Really. I'm in awe, this guy is a genius.

  • Crash Magnet (unregistered)

    I thought the 80/20 rule was: The first 80% of the project takes 20% of your time, the last 20% of the project takes 80% of the time. But I remember it as the 90/10 rule.

  • Rene (unregistered) in reply to The MAZZTer
    I feel dumb. I can't find the integer overflow. :(

    In more or less every DOS compiler you'll find, int is 16 bits, the same size as short. 0x000F423F > 0xFFFF
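
    A minimal sketch of that wraparound, emulated with int16_t so it runs on a modern machine (strictly speaking, signed overflow is undefined behavior in C; the old 16-bit compilers simply wrapped):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int16_t i = 32765;    /* start near INT16_MAX to keep the demo short */
        int step;
        for (step = 0; step < 4; step++) {
            printf("i = %d\n", i);
            i++;              /* wraps from 32767 to -32768 */
        }
        return 0;
    }

    And since the constant 1000000 has type long on such a compiler, the comparison i < 1000000 promotes i to long, so the wrapped counter never reaches it: the "speed-up loop" never exits at all.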

  • (cs)

    Building in future optimizations is a pretty standard technique. In certain organizations you simply have to prepare for defending yourself.

    Very similar to hardware engineers who purposely design in expensive components, knowing they'll be asked to go through a cost reduction phase later.

    No WTF here; just sensible self-defense.

  • smooth menthol flavor (unregistered) in reply to The MAZZTer

    ints were probably only two-byte words back in 1989. int i could only have gone up to 32767.

  • Benjamin Normoyle (unregistered)

    I thought 80/20 was how to completely bullshit situations: when you need a percentage, use either 80 or 20 percent as your number. E.g. : "80 % of this department is slacking off," "20% of companies give free soda to their employees," "20% of experts say YO MAMA."

  • Tiver (unregistered) in reply to The MAZZTer
    The MAZZTer:
    I feel dumb. I can't find the integer overflow. :( <snip> [Edit: Wait... it makes more sense if those were int16s... were they?]

    Correct, which in the days of Windows 2, they most definitely were.

  • sweavo (unregistered) in reply to Rene
    Rene:
    I feel dumb. I can't find the integer overflow. :(

    In more or less every DOS compiler you'll find, int is 16 bits, the same size as short. 0x000F423F > 0xFFFF

    Don't forget it's 1987.

    Wayne is a Genius. Where is he working now? Can I apply?

  • sweavo (unregistered) in reply to sweavo
    Don't forget it's 1987.

    oh wait, that would be three months after 1986 not 1989, hang on...

    Don't forget it's 198L

  • Keith (unregistered)

    At last!

    Finally, an explanation of why Lotus Notes is so &^%@&$ SLOW!!!

  • (cs)

    I had a uni lecturer explain how we should do things like this. He said it was great as a contractor; he'd get calls back from past clients asking if he could update their system to run faster, and he'd do just what the OP described: shorten some unnecessary, purposely placed loop, say it took 6+ hours, and voila, instant payday.

  • Justin (unregistered)

    Wow. What a BOFH, or perhaps a BPFH.

  • Anon Coward (unregistered)

    I worked on a large consumer application that's on at least version 12 now. The code is sprinkled with sleep(10000) everywhere. Some were hold-overs from past days when you were waiting for a supposedly synchronous hardware operation to complete; other times it was because the original programmer didn't understand multi-threading and IPC, so the sleep was there to let another thread finish an operation or to avoid hammering on a shared resource. Our 'Architect' would regularly go back, reduce the numbers, and claim wonderful improvements in how fast the program now ran. Alas, his name wasn't Wayne.

  • Jay (unregistered)

    I thought the 80/20 rule was: The first 20% of the project takes 80% of the allotted time, while the remaining 80% of the project takes another 80% of the allotted time.

    The big flaw I see to their speed-up loops is that they should have made the ending value a parameter read in from a config file or something of the sort. Then you wouldn't have to recompile to change the delay. Also, it really should check the system clock rather than just counting. That way you'd get consistent performance savings regardless of the speed of the processor.
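
    For illustration, a sketch of that suggestion; the config file name and format here are made up, and a real product would presumably bury them somewhere less obvious:

    #include <stdio.h>
    #include <time.h>

    /* Read the delay from a config file so no recompile is needed. */
    static long read_delay_ms(const char *path)
    {
        long ms = 500;                /* default delay if no config found */
        FILE *f = fopen(path, "r");
        if (f != NULL) {
            fscanf(f, "%ld", &ms);
            fclose(f);
        }
        return ms;
    }

    /* Burn time against the clock, so the delay (and the future
       "performance improvement") is the same on any CPU. */
    static void speed_up_loop(long ms)
    {
        clock_t end = clock() + (clock_t)((double)ms * CLOCKS_PER_SEC / 1000.0);
        while (clock() < end)
            ;
    }

    int main(void)
    {
        speed_up_loop(read_delay_ms("speedup.cfg"));
        puts("Work complete.");
        return 0;
    }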

  • Battra (unregistered)

    I've done that myself, and it's still in production. The client complained that he couldn't see the little animated widget I'd put in for the AJAX call, so I added a 0.5 second delay on the server side of the call. Result: a happy (but stupid) client.

  • gcc4ef3r (unregistered)

    uh, wouldn't the compiler optimize that away? i mean, depending on the intelligence of the compiler...

  • (cs)

    Can't quite figure out why the last contractor "just left all of a sudden." Did he decide he wanted a real job instead?

  • (cs) in reply to gcc4ef3r

    Not back then; optimization is a fairly new field.

  • (cs)

    Wayne and Wane really need to work out their split personality issues.

  • Alcari (unregistered)

    Oh, I've done that. Someone complained that sometimes saving a file was too fast, so he didn't know if it was successful, since the status bar popped in and out too fast.

    Solution: If filesize < 2mb open FakeSaveAnimation.gif end

  • VolodyA! V A (unregistered)

    The funny thing for me was the Google ad on this exact page:

    "Slow computer? Speed it up by cleaning your registry..." Maybe there is some NumberOfZerosInSpeedupLoop in the Windows registry?

  • Mick (unregistered) in reply to gcc4ef3r
    gcc4ef3r:
    uh, wouldn't the compiler optimize that away? i mean, depending on the intelligence of the compiler...

    I read statements like this again and again, but I'll be damned if I've ever seen a compiler do that. OK, OK, I've only used the compiler that comes with (various versions of) Visual C++, but it is supposed to optimize some, and even with full optimizations on in a release build, I've never found it optimizing anything like this away.

  • jon (unregistered)

    Isn't the 80/20 rule... 80% of the work is done by 20% of the people? No wait, I'm thinking of the 90/10 rule.

  • SomeCoder (unregistered) in reply to Mick
    Mick:
    gcc4ef3r:
    uh, wouldn't the compiler optimize that away? i mean, depending on the intelligence of the compiler...

    I read statements like this again and again, but I'll be damned if I've ever seen a compiler do that. OK, OK, I've only used the compiler that comes with (various versions of) Visual C++, but it is supposed to optimize some, and even with full optimizations on in a release build, I've never found it optimizing anything like this away.

    It will optimize it away. Just setting Release mode in Visual Studio isn't full optimization; you also need to tell it to optimize for speed.

    One time, I tried something like this:

    while (true);

    Set the mode to Release and the optimizations to "Speed" and voila, program runs, no infinite loop.

    Back in the late 80s though, there probably weren't optimizations like this so that for loop would have happily done nothing... 1000000 times.

  • (cs) in reply to Mick
    Mick:
    gcc4ef3r:
    uh, wouldn't the compiler optimize that away? i mean, depending on the intelligence of the compiler...

    I read statements like this again and again, but I'll be damned if I've ever seen a compiler do that. OK, OK, I've only used the compiler that comes with (various versions of) Visual C++, but it is supposed to optimize some, and even with full optimizations on in a release build, I've never found it optimizing anything like this away.

    The C# compiler in VS2008 certainly doesn't. Built in release mode with optimisation: Source:
    static void Main(string[] args) { for (int i = 0; i < 100000; i++) ; Console.WriteLine("Hello World!"); }

    Disassembly (from Lutz Roeder's .NET Reflector):

    private static void Main(string[] args) { for (int i = 0; i < 0x186a0; i++) { } Console.WriteLine("Hello World!"); }
  • jtl (unregistered) in reply to Alcari
    Alcari:
    Oh, I've done that. Someone complained that sometimes saving a file was too fast, so he didn't know if it was successful, since the status bar popped in and out too fast.

    Solution: If filesize < 2mb open FakeSaveAnimation.gif end

    Apparently giving the user a 'save complete' message was too much hassle?

  • AdT (unregistered) in reply to SomeCoder
    SomeCoder:
    One time, I tried something like this:

    while (true);

    Set the mode to Release and the optimizations to "Speed" and voila, program runs, no infinite loop.

    Back in the late 80s though, there probably weren't optimizations like this so that for loop would have happily done nothing... 1000000 times.

    What you mentioned is not an optimization, it's a breaking change. Removing an infinite loop is not a speed-up, it's a major semantic change. Just imagine you had put "DeleteAllFilesInUsersHomeDir();" after the loop.

    Visual C++ is notorious for "optimizing" code in ways that break it. I encountered such cases myself and have seen co-workers run into similar issues, where the Debug code ran fine, the Release version screwed up big time, and disabling some optimizations resolved the issue.

  • (cs) in reply to Mick
    Mick:
    gcc4ef3r:
    uh, wouldn't the compiler optimize that away? i mean, depending on the intelligence of the compiler...

    I read statements like this again and again, but I'll be damned if I've ever seen a compiler do that. OK, OK, I've only used the compiler that comes with (various versions of) Visual C++, but it is supposed to optimize some, and even with full optimizations on in a release build, I've never found it optimizing anything like this away.

    WRONG!!!

    // OptTest.cpp : Defines the entry point for the console application.
    //
    
    #include "stdafx.h"
    
    int _tmain(int argc, _TCHAR* argv[])
    {
    	const char *Before = "Before\n";
    	const char *After = "After\n";
    
    	printf(Before);
    00401000  push        offset string "Before\n" (406104h) 
    00401005  call        printf (40101Ah) 
    	
    	int i;
    	for(i=0;i<1000000;i++) {;}
    	
    	printf(After);
    0040100A  push        offset string "After\n" (4060FCh) 
    0040100F  call        printf (40101Ah) 
    00401014  add         esp,8 
    
    	return 0;
    00401017  xor         eax,eax 
    }
    00401019  ret
  • ChiefCrazyTalk (unregistered) in reply to Jay
    Jay:
    I thought the 80/20 rule was: The first 20% of the project takes 80% of the allotted time, while the remaining 80% of the project takes another 80% of the allotted time.

    The big flaw I see to their speed-up loops is that they should have made the ending value a parameter read in from a config file or something of the sort. Then you wouldn't have to recompile to change the delay. Also, it really should check the system clock rather than just counting. That way you'd get consistent performance savings regardless of the speed of the processor.

    Ummm, it's the other way around.

  • Ozz (unregistered) in reply to Crash Magnet
    Crash Magnet:
    I thought the 80/20 rule was: The first 80% of the project takes 20% of your time, the last 20% of the project takes 80% of the time. But I remember it as the 90/10 rule.
    It's actually the 90/90 rule. The first 90% of the work takes 90% of the time, and the remaining 10% of the work takes the other 90% of the time.
  • (cs) in reply to mallard
    mallard:
    The C# compiler in VS2008 certainly doesn't. Built in release mode with optimisation: <snip>

    There's no optimization the compiler could do there. Removing the for loop would change the behavior of the program (by not printing out the "Hello World!"). If you removed the WriteLine, then it should remove the loop.

  • Matthew (unregistered) in reply to savar

    Integer overflow!? How is that the "obvious" error? He had me at *p = 0x10000;. I'm sorry, but if you're picking an arbitrary memory address and writing 1 million zeros starting at that point, the LEAST of your problems will be the integer overflow of i. Hell, in a system like DOS, you may not even make it to the point of overflowing i before the system hangs because you wrote over some video buffer or something.

  • derula (unregistered)

    TRWTF is that the article has 2^10 words.

  • AdT (unregistered) in reply to SuperousOxide
    SuperousOxide:
    There's no optimization the compiler could do there. Removing the for loop would change the behavior of the program (by not printing out the "Hello World!").

    No, it wouldn't. In either case, "Hello World!" is printed exactly once.

  • Ben (unregistered) in reply to mallard
    mallard:
    Mick:
    gcc4ef3r:
    uh, wouldn't the compiler optimize that away? i mean, depending on the intelligence of the compiler...

    I read statements like this again and again, but I'll be damned if I've ever seen a compiler do that. OK, OK, I've only used the compiler that comes with (various versions of) Visual C++, but it is supposed to optimize some, and even with full optimizations on in a release build, I've never found it optimizing anything like this away.

    The C# compiler in VS2008 certainly doesn't. Built in release mode with optimisation: Source:
    static void Main(string[] args) { for (int i = 0; i < 100000; i++) ; Console.WriteLine("Hello World!"); }

    Disassembly (from Lutz Roeder's .NET Reflector):

    private static void Main(string[] args) { for (int i = 0; i < 0x186a0; i++) { } Console.WriteLine("Hello World!"); }

    Probably because that loop actually does something?

  • (cs) in reply to GettinSadda
    Mick:
    gcc4ef3r:
    uh, wouldn't the compiler optimize that away? i mean, depending on the intelligence of the compiler...

    I read statements like this again and again, but I'll be damned if I've ever seen a compiler do that. OK, OK, I've only used the compiler that comes with (various versions of) Visual C++, but it is supposed to optimize some, and even with full optimizations on in a release build, I've never found it optimizing anything like this away.

    WRONG TAKE 2!!!

    Same code compiled in Visual C++ 6

    00401000   push        406038h
    00401005   call        00401020
    0040100A   push        406030h
    0040100F   call        00401020
    00401014   add         esp,8
    00401017   xor         eax,eax
    00401019   ret
    0040101A   nop
    0040101B   nop
    0040101C   nop
    0040101D   nop
    0040101E   nop
    0040101F   nop
    
  • Ben (unregistered)

    That is a WTF? No

    That's a GREAT IDEA

  • (cs) in reply to SuperousOxide
    SuperousOxide:
    mallard:
    The C# compiler in VS2008 certainly doesn't. Built in release mode with optimisation: <snip>

    There's no optimization the compiler could do there. Removing the for loop would change the behavior of the program (by not printing out the "Hello World!"). If you removed the WriteLine, then it should remove the loop.

    No, the program only prints out "Hello World!" once, after the loop. Notice the ";" at the end of "for (int i = 0; i < 100000; i++) ;", that is, as the disassembly showed, an empty loop body, equivalent to "{ }".

    Besides, even with the "Console.WriteLine("Hello World!");" removed (commented out), the disassembly becomes:

    private static void Main(string[] args) { for (int i = 0; i < 0x186a0; i++) { } }

    The loop still remains.

  • (cs) in reply to Ben
    Ben:
    That is a WTF? No

    That's a GREAT IDEA

    I really hope none of you guys work for Microsoft.

  • AdT (unregistered) in reply to Matthew
    Matthew:
    Hell, in a system like DOS, you may not even make it to the point of overflowing i before the system hangs because you wrote over some video buffer or something.

    What? I've done some video programming under DOS, and if there was one memory area with a fixed address that you wanted to overwrite with zeroes, it was the video RAM. And no, the system would not hang if you did this.

    The system would rather hang (or probably crash in a more spectacular way) if you overwrote some hardware interrupt handler or some code/data structures used by such a handler. You could prevent that by executing CLI before the loop. Of course that would only make any sense if you wanted to kick DOS from memory altogether, for example to load your own OS instead.
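
    For the curious, a sketch of that kind of write (real mode only: far pointers and MK_FP are from 16-bit DOS compilers such as Turbo C, and this won't run under a modern protected-mode OS):

    #include <dos.h>

    int main(void)
    {
        char far *vram = (char far *)MK_FP(0xB800, 0);  /* color text-mode video RAM */
        unsigned n;
        for (n = 0; n < 4000; n++)    /* 80x25 characters, 2 bytes each */
            vram[n] = 0;              /* blanks the screen; the machine keeps running */
        return 0;
    }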

  • AdT (unregistered) in reply to Ben
    Ben:
    Probably because that loop actually does something?

    It's amazing how many people don't understand the C# syntax, although in the case of this loop it's absolutely identical to that of C or C++.

    This is a loop whose body consists of the empty statement. It has no effect on program semantics.

  • SomeCoder (unregistered) in reply to AdT
    AdT:
    SomeCoder:
    One time, I tried something like this:

    while (true);

    Set the mode to Release and the optimizations to "Speed" and voila, program runs, no infinite loop.

    Back in the late 80s though, there probably weren't optimizations like this so that for loop would have happily done nothing... 1000000 times.

    What you mentioned is not an optimization, it's a breaking change. Removing an infinite loop is not a speed-up, it's a major semantic change. Just imagine you had put "DeleteAllFilesInUsersHomeDir();" after the loop.

    Visual C++ is notorious for "optimizing" code in ways that break it. I encountered such cases myself and have seen co-workers run into similar issues, where the Debug code ran fine, the Release version screwed up big time, and disabling some optimizations resolved the issue.

    The point was not that the loop was infinite; it was that the loop did nothing. Consider:

    someThreadVariable = 400;

    while (someThreadVariable != 500);

    DeleteAllFilesInUsersHomeDir();

    If we aren't careful with threading, someThreadVariable could be changed by another thread. So the current thread should "pause" until the other thread changes the variable to 500.

    If you run this in debug mode, it will work as you would expect. If you turn on optimizations, Visual Studio will see this as a loop that does nothing and remove it, and the "pause" we were counting on is gone.

    NOTE: I am not saying that this type of coding is a good idea! It's just an example!!
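
    For completeness, the usual band-aid for the example above is volatile, which forces a real read on every iteration so the compiler can't remove the loop. (Still not proper synchronization; modern code should use atomics or condition variables.)

    /* Shared flag; volatile makes the compiler re-read it each time. */
    volatile int someThreadVariable = 400;

    void wait_for_other_thread(void)
    {
        /* Another thread is assumed to eventually store 500 here. */
        while (someThreadVariable != 500)
            ;
        /* ...now safe to proceed. */
    }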

  • Intelligent Layman (unregistered) in reply to sweavo

    Don't forget it's 1987.

    It's not pretend-to-be-a-time-traveler day. You must really be a time traveler.
