• Evil Otto (unregistered) in reply to jtl

    Then the user would complain about having to click OK every time they saved something.

    Sometimes the clients need to be protected from themselves.

  • EA (unregistered) in reply to Benjamin Normoyle

    Actually, the rule is that 80% of the work takes 20% of the time, and the other 20% of the work takes 80% of the time.

  • david (unregistered) in reply to fred
    fred:
    It is fake: in 1987, a 1-to-1,000,000 loop would take a very, very noticeable amount of time (i.e., on the order of seconds). Having this for each screen refresh? I call b*s!t.

    Anyway, empty loops in screen-updating code on early PCs were common, because you got "snow" effects if you updated the screen too frequently...

    WTF? Random delays don't solve that problem unless you are lucky. You need to synchronize with the screen update, which you do by hooking an interrupt or looping until the video status register clears - not with an empty loop.
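
    For reference, this is the classic retrace wait being described - a minimal sketch, assuming a DOS-era compiler that provides inp() in <conio.h> (0x3DA is the CGA/VGA input status register; bit 3 is set during vertical retrace):

        #include <conio.h>

        #define VIDEO_STATUS 0x3DA
        #define VRETRACE     0x08

        void wait_for_vertical_retrace(void)
        {
            /* if we're mid-retrace, wait for it to end first */
            while (inp(VIDEO_STATUS) & VRETRACE)
                ;
            /* now wait for the next retrace to begin, then draw */
            while (!(inp(VIDEO_STATUS) & VRETRACE))
                ;
        }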

  • david (unregistered) in reply to Teh Optermizar
    Teh Optermizar:
    JonC:

    It tells you that it's 1024 words on the front page

    Aha... I came in via a direct link in the RSS feed, so I didn't see that... it's only 892 according to Word (904 if you include the title and by-line gubbins)

    Obviously this is an indication that Microsoft is short-changing me on words... bastards... :P

    Last I knew, editor word counts were designed for reporters and other people who actually used them - and it was really a letter count, because newspapers and magazines specified, laid out, and paid that way. So 892 "words" in most editors meant something like 3,568 characters. Or has that changed?

  • david (unregistered) in reply to Chmeee
    Chmeee:
    GCC will not optimize away the empty loop. It is documented as such, because GCC assumes that such a loop is purely for delay purposes, so optimizing it away is undesirable.

    gcc can and does optimise away an empty loop, or anything after an unconditional return statement. It will optimise away all of these:

        for (i = 0; i < 1000000; i++);
        for (i = 0; i < 1000000; i++) {}
        for (i = 0; i < 1000000; i++) { i = i; }

    However, it will not optimise away:

        for (u = 0; u < -1; u++) { process_command(wait_for_command()); }

    nor will it make various other obvious optimisations, even at -Os, and of course it suffers from the limitations of the language definition when it comes to optimisation. Part of that has been addressed by changes to the language definition (for example, strict aliasing), but it is still a far cry from languages which were designed with code optimisation in mind.
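
    A quick way to check this yourself - a minimal sketch, assuming a reasonably modern gcc:

        /* loops.c */
        void kept(void)
        {
            volatile int i;
            for (i = 0; i < 1000000; i++)
                ;   /* volatile accesses force gcc to keep the loop */
        }

        void removed(void)
        {
            int i;
            for (i = 0; i < 1000000; i++)
                ;   /* no side effects: deleted at -O1 and above */
        }

    Compile with gcc -O2 -S loops.c and compare: removed() collapses to a bare ret, while kept() still counts to a million.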

  • int19h (unregistered) in reply to mallard

    The loop will get optimized away by the JIT compiler later on. Most .NET compilers don't optimize aggressively, because they know that all the serious optimizations (e.g., function inlining, even constant folding) will be done at run time.

    The way to check is to run the program in VS and switch to disassembly. But first you need to find the obscure "Suppress JIT optimizations when debugging" checkbox somewhere in the VS options dialog and turn it off; only then will you see what is actually executed when your program runs standalone.

  • Chris (unregistered) in reply to Rene

    It's in the for loop. A signed int is limited to 32767, but the loop goes way further. I feel dumb because I can't remember how to work out what that pointer would look like after the loop.

  • Majax (unregistered) in reply to Rene

    In the for loop he increments an int from 0 to 1000000:

    int i;
    for (i = 0; i < 1000000; i++)

    The problem is that an int doesn't go past 65535.

    check: http://en.wikipedia.org/wiki/65535_(number)

  • Majax (unregistered) in reply to Chris

    I think the int would wrap to -32768 if signed, or go back to 0 if unsigned.

    PS: Sorry, 65535 is the limit for an unsigned int; Chris is right about the 32767 limit for a signed int.
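
    For anyone following along, a minimal sketch of the wraparound under discussion, using the fixed-width types from <stdint.h> to stand in for a 16-bit compiler's int (the signed narrowing is implementation-defined in C, so -32768 is the typical result, not a guaranteed one):

        #include <stdio.h>
        #include <stdint.h>

        int main(void)
        {
            int16_t  i = 32767;    /* max signed 16-bit value   */
            uint16_t u = 65535;    /* max unsigned 16-bit value */

            i = (int16_t)(i + 1);  /* typically wraps to -32768 */
            u = (uint16_t)(u + 1); /* defined: wraps to 0       */

            printf("%d %u\n", i, (unsigned)u);   /* prints: -32768 0 */
            return 0;
        }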

  • Mkay (unregistered) in reply to SomeCoder

    WTF?! You don't write an empty loop that consumes resources (= CPU) while doing nothing (= waiting for some threading variable)!

    Go and look at some thread synchronization methods, please, and don't spread this nonsense...

  • Mkay (unregistered) in reply to Vilfredo Pareto

    Sorry, I forgot to quote the guy with the

    someThreadVariable = 400;
    while (someThreadVariable != 500);
    DeleteAllFilesInUsersHomeDir();

    ^^^^ idea
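
    What Mkay is getting at, as a minimal POSIX-threads sketch (the variable names just mirror the quoted snippet; everything here is illustrative):

        #include <pthread.h>

        static pthread_mutex_t lock    = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t  changed = PTHREAD_COND_INITIALIZER;
        static int someThreadVariable  = 400;

        /* The waiter sleeps in the kernel instead of burning CPU. */
        void wait_until_500(void)
        {
            pthread_mutex_lock(&lock);
            while (someThreadVariable != 500)      /* re-check after wakeups */
                pthread_cond_wait(&changed, &lock);
            pthread_mutex_unlock(&lock);
        }

        /* The writer signals the change instead of hoping a spinner notices. */
        void set_thread_variable(int v)
        {
            pthread_mutex_lock(&lock);
            someThreadVariable = v;
            pthread_cond_signal(&changed);
            pthread_mutex_unlock(&lock);
        }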

  • robardin (unregistered)

    OK, so they're 16-bit integers and (presumably) 16-bit pointer addresses as well. But without a malloc() or initialization to a static buffer, won't the line

    char *p = 0x10000;
    

    simply result in a pointer to who-knows-where, and won't the dereference in the loop:

    *p++ = 0;
    

    cause an immediate crash? (Or is this some ancient DOS magic memory address for some kind of device or O/S hook?)
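
    A guess at the answer, sketched under the assumption that the original ran as a small-model, real-mode DOS program (none of this is from the article itself):

        /* 0x10000 doesn't even fit in a 16-bit near pointer, so p most
           likely starts at offset 0x0000 of the data segment and wraps
           around that 64 KB segment every 65536 increments, quietly
           zeroing whatever it passes over.  No crash: real mode has no
           memory protection to raise a fault. */
        char *p = (char *)0x10000;   /* cast needed; it's an int-to-pointer
                                        conversion either way */
        long i;                      /* long counter, to sidestep the
                                        16-bit "int i" issue above */
        for (i = 0; i < 1000000L; i++)
            *p++ = 0;                /* scribbles, but does not trap */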

  • Daniel Smedegaard Buus (unregistered)

    TRRWTF here is that Windows 2.11 was on the market.

  • Toni N (unregistered)

    This story is such a classic gem, I keep coming back to read it again and again. The phrase "speed-up loop" has become a standard term where I work :)

  • Jimmy the Geek (unregistered)

    The 80/20 rule is also known as the Pareto principle.

    In software, the 80/20 rule is the ratio between program completion and time: you can get 80% of the work done in 20% of the time, while the last 20% of the work takes 80% of the time.

  • (cs) in reply to SomeCoder
    SomeCoder:
    Mick:
    gcc4ef3r:
    uh, wouldn't the compiler optimize that away? i mean, depending on the intelligence of the compiler...
    I read statements like this again and again, but I'll be damned if I've ever seen a compiler do that. OK, OK, I've only used the compiler that comes with (various versions of) Visual C++, but it is supposed to do some optimizing, and even with full optimizations on in a release build it has never optimized anything like this away that I've found.
    It will optimize it away. Just setting Release mode in Visual Studio isn't full optimizations; you also need to tell it that you want the code to run as fast as possible.

    One time, I tried something like this:

    while (true);

    Set the mode to Release and the optimizations to "Speed" and voila, the program runs, no infinite loop.

    If any version of VS really did what you wrote, it's a bug and should be reported, yada yada. Hopefully after you re-read your post it'll be obvious why: this code doesn't "do nothing", it halts. IIRC, the C spec does not allow optimizing halts away.

    Cheers, Kuba
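
    A minimal illustration of the distinction Kuba is drawing, as later codified in C11 (6.8.5p6: an implementation may assume a loop terminates only if its controlling expression is not a constant expression):

        /* Constant controlling expression: the compiler may NOT assume
           this terminates, so deleting it would be a compiler bug. */
        void halts(void)
        {
            while (1)
                ;
        }

        /* Non-constant condition and no side effects: the compiler may
           assume it terminates, and since the body does nothing it can
           remove the whole loop - exactly what happens to a speed-up
           loop at higher optimization levels. */
        void delay(void)
        {
            int i;
            for (i = 0; i < 1000000; i++)
                ;
        }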

  • Mark (unregistered) in reply to VolodyA! V A
    VolodyA! V A:
    The funny thing for me was the Google ad on this exact page:

    "Slow computer? Speed it up by cleaning your registry..." maybe there is some NumberOfZerosInSpeedupLoop in Windows registry?

    I am positive some large packages use a loop to make you think you're getting some real meat when you install...

    Take Adobe CS2 - I mean, how much does an install really have to do? Copy some files and set the registry? Why is this taking 10-15 minutes?!

    Watch it sometime. No disk activity, no CPU. It's a scam!

  • Mark (unregistered) in reply to Kain0_0
    Kain0_0:
    chrismcb:
    Alcari:
    Oh, I've done that. Someone complained that saving a file was sometimes too fast: he didn't know whether it was successful, since the status bar popped in and out too quickly.

    Solution: If filesize < 2mb open FakeSaveAnimation.gif end

    There is an ENORMOUS difference between slowing things down for the sake of making you and your department look better, and slowing things down to let the user know something happened.

    I agree with you: to be truly ethical, such loops (without another purpose like synchronisation, interfacing with a human, etc.) shouldn't be used, as a general rule of thumb.

    On the other hand, if your boss is milestone-minded and only pays attention to, say, runtime (or other very narrow and limited statistics), techniques like the above can be useful to show that progress is being made. To be ethical in this case you actually need to have made progress on program faults/refactoring/scaffolding.

    It would also be useful for punishing your bosses for a really crappy design decision: you can simply slow the program down.

    boss: Why is the system so slow?
    tech: Well, you know those changes to the xyz runtime...
    boss: Yes?
    tech: They don't fit there; we need to move them to abc (the right place).
    boss: Will that speed it all up again?
    tech: Pretty sure.

    (Note: you really should only punish them if you are very certain it won't work, or won't work well. You shouldn't use it when you're simply not getting your way, or are doing something you're uncertain of or don't want to do.)

    Sometimes you need to get people to respect your knowledge and skill. Sometimes you can only do it by hitting them - even though we don't really want to. It's the difference between being paid to make a crappy house and being paid to make the house they really will like and need.

    "Sometimes you need to get people to respect your knowledge and skill. Sometimes you can only do it by hitting them - even though we don't really want to."

    Omg, lol. You lie!

  • forch (unregistered) in reply to Benjamin Normoyle
    Benjamin Normoyle:
    I thought 80/20 was how to completely bullshit situations: when you need a percentage, use either 80 or 20 percent as your number. E.g.: "80% of this department is slacking off," "20% of companies give free soda to their employees," "20% of experts say YO MAMA."

    It's actually that 80% of a project's work is done in 20% of the estimated time, and the remaining 20% takes up 80% of the time ;)

  • StJohn (unregistered)

    I admit to doing this on a web application some seven years ago. I just added a wait, using setTimeout() with a random delay between, say, 500 and 5000 ms, when saving a form. This way the wait changed every time, which made it less obvious that the application was unoptimized.

    Then I just tweaked the values down a bit to improve speed.
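
    StJohn's trick, as a minimal C sketch (the original used JavaScript's setTimeout(); rand() and usleep() here are just the nearest C analogue):

        #include <stdlib.h>
        #include <unistd.h>

        /* Sleep for a random 500-5000 ms so the delay looks like natural
           variance instead of a fixed fudge factor. */
        void fake_save_delay(void)
        {
            int ms = 500 + rand() % 4501;
            usleep((useconds_t)ms * 1000);
        }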

  • Cbuttius (unregistered) in reply to sweavo

    It isn't a speed-up loop of course, although I would use sleep() so as not to burn CPU.

    It's what is known in the market as the discriminatory pricing model, and it works by extracting from people what they are willing to pay.

    So sometimes you purposely provide an inferior product so people who don't want to pay you as much can still have a product that works, whilst the rich ones will pay you more for a superior one.

    Normally that applies where you have a large market and want to sell to as many people as you can, but in this case it is sort of twisted to work on the current employer: pay us more and we'll make your app faster.

    Of course, since they are "removing a 0" each time, it will only work until they run out of zeros to remove, so they'd better have this in several places.

  • agtrier (unregistered)

    OMG! I think I was working as support for the same application. Seriously!

  • agtrier (unregistered) in reply to forch

    In my experience, the first 80% of the project takes up 120% of the time, and the other 20% takes another 80%.

  • (cs)

    A job like this would be a godsend in the modern day. 1 hour doing menial tasks, and the rest of the time could be spent learning the latest and greatest newfangled JavaScript frameworks or whatever. The only thing better would be a super lax IT environment where you could play games and stuff on your computer.

    Get a fat paycheck for 1 hour of work and 7 hours of doing whatever you want? Fuck yes. Call me unethical all you want, but you know that would be paradise.

  • Lol (unregistered) in reply to Benjamin Normoyle

    Several-years-late reply; software engineer here from the future. The 80/20 rule is just that: 80% of something in a business is caused by 20% of something else. It is kind of a joke rule, but it seems to hold true nonetheless. E.g., 80% of bugs are caused by 20% of employees; 20% of sales are caused by 80% of employees; and so on. Wikipedia it. Not sure if we had that in 2008... I think we did, but people complained about the reliability of citations.

  • Yuval (unregistered)

    I heard that the 80/20 rule is about gaining 80% of the success using 20% of the effort. (And don't try to understand what it means.)

  • James (unregistered)

    The 80/20 rule is the Pareto principle (https://en.wikipedia.org/wiki/Pareto_principle). "The Pareto principle (also known as the 80/20 rule, the law of the vital few, or the principle of factor sparsity) states that, for many events, roughly 80% of the effects come from 20% of the causes."

  • Alex (unregistered) in reply to The MAZZTer

    Aren't chars usually only one byte? Which means it could only go to 255?

  • Terje Wiig Mathisen (unregistered)

    The best version of this that I've ever heard about happened around the same timeframe, in a very high-end application where the specification called for 10 Hz sampling of a position sensor that would then be shown on the screen. My friend, the developer, didn't think this would provide sufficient feedback, so he optimized the machine code to do 20 Hz, but then simply skipped every odd update.

    A few months later, after the actual end users had tried the system, they came back to him: "Our users say that it feels too laggy to control; can you please manage to increase the sampling rate?"

    So he opened his code editor, commented out the "test ax,1 / jnz skip" sequence, waited a couple of days, and sent back the updated code to great reviews and very happy end users. :-)

