• a giant (unregistered)

    I don't think that would work in any language.

  • Calli Arcale (unregistered) in reply to Eam
    Actually, it's a pretty important nitpick. The fact that it's not truly random may very well mean that it almost always has a reasonable value on a particular machine. If the initialization problem isn't as stupidly obvious as this case, the fact that it's not random could make for a real debugging nightmare.

    The horror, the horror....

    Part of the trouble here is that we computer science types have a tendency to abuse the word "random" as if it's synonymous with "arbitrary". Within the scope of this code snippet, the result will be arbitrary, although depending on the full context, it might in fact be quite predictable.

    Hardly anything on a computer is truly random in the purest sense of the word -- even the output of "random" number generators.

  • (cs) in reply to Peter
    Peter:
    GUnit:
    You silly goose, this is a better approach:

    class Something {

    public:

        Something() { int nSomeValue = 42; }

        void Accessor(void)
        {
            int nSomeValue;

            while (nSomeValue != 42)
            {
                printf("%d\n", nSomeValue);
                // Problem solved. May take a while though
            }
        }
    }

    Actually, as you're not changing the memory location you are looking at or the value in it, that's an infinite loop unless 42 is in there to start with. Unless printf modifies its parameters now :P!

    No, GUnit has it right. It's only a matter of time before cosmic rays flip the right bits so it prints 42. I calculate it will take 7.5 million years. --Rank

  • brendan (unregistered) in reply to Rank Amateur
    Rank Amateur:
    Peter:
    GUnit:
    You silly goose, this is a better approach:

    class Something {

    public:

        Something() { int nSomeValue = 42; }

        void Accessor(void)
        {
            int nSomeValue;

            while (nSomeValue != 42)
            {
                printf("%d\n", nSomeValue);
                // Problem solved. May take a while though
            }
        }
    }

    Actually, as you're not changing the memory location you are looking at or the value in it, that's an infinite loop unless 42 is in there to start with. Unless printf modifies its parameters now :P!

    No, GUnit has it right. It's only a matter of time before cosmic rays flip the right bits so it prints 42. I calculate it will take 7.5 million years. --Rank

    That would only work if the compiler didn't optimize the code by putting the local nSomeValue into a register. Because if it does, you'll have to wait more than 7.5 million years.

  • (cs) in reply to Calli Arcale
    Calli Arcale:
    Actually, it's a pretty important nitpick. The fact that it's not truly random may very well mean that it almost always has a reasonable value on a particular machine.

    Part of the trouble here is that we computer science types have a tendency to abuse the word "random" as if it's synonymous with "arbitrary". Within the scope of this code snippet, the result will be arbitrary, although depending on the full context, it might in fact be quite predictable.

    Hardly anything on a computer is truly random in the purest sense of the word -- even the output of "random" number generators.

    True hardware RNGs, like Intel's thermal-noise-based generators, are becoming more common in general-purpose computers and some embedded applications. But those still represent only a tiny fraction of computers.

    In this case, of course, the "arbitrary" value is the result of taking the value of an uninitialized object with automatic storage class. In C, and I believe C++ is the same here, the Standard says that has Undefined Behavior. So a conforming implementation could sample a TRNG, if one is available, and the value could indeed be random.

  • Scooter (unregistered)

    All this time and not a one of you has noticed that the code wouldn't compile anyway, thanks to the missing semicolon after the class declaration.

    I expected better from you all, really I did.

  • 332960073452 (unregistered)

    42 - The answer to the Ultimate Question of Life, the Universe, and Everything!

  • Ohnonymous (unregistered)

    The REAL WTF is that this code suspiciously uses names like "Something" and "SomeValue", and no one seemed to notice how suspicious that was.

    Obviously, the other coders were just testing whether Clint actually knew how to program.

  • The real wtf fool (unregistered)

    Seems more like a doh! than a wtf to me.

    Easily done when converting a local variable to a member variable and forgetting to get rid of the local declaration, especially when the code is split across header and implementation files and not just a tiny snippet like this one.
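    That conversion slip can be sketched in a few lines (hypothetical class names, not the article's actual code): the stray "int" in the constructor declares a brand-new local that shadows the member, so the member is never set.

    ```cpp
    #include <cstdio>

    // Buggy version: the "int" in the constructor declares a new local that
    // shadows the member, so the member is never assigned. The member is
    // zero-initialized here only so this demo avoids undefined behavior.
    class BuggySomething {
        int nSomeValue = 0;
    public:
        BuggySomething() { int nSomeValue = 42; (void)nSomeValue; } // shadows!
        int Get() const { return nSomeValue; }
    };

    // Fixed version: drop the type, and the assignment targets the member.
    class FixedSomething {
        int nSomeValue;
    public:
        FixedSomething() { nSomeValue = 42; }
        int Get() const { return nSomeValue; }
    };

    int main() {
        std::printf("buggy: %d, fixed: %d\n",
                    BuggySomething().Get(), FixedSomething().Get());
        return 0;
    }
    ```

    Compiling with -Wshadow (gcc/clang) flags the buggy constructor, which is exactly the warning that would have caught the article's bug.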

  • martin (unregistered) in reply to Harro
    Harro:
    It's a miracle if you do get 42 as a result!

    It's not a miracle; depending on the architecture, the chances are one in 2 to the power of 32 or one in 2 to the power of 64.

    No miracle here - it's more a question of when it will happen, not if it will happen.

  • H.Hopo (unregistered) in reply to martin

    This reminds me of the world's best, unbeatable compression algorithm I invented some years ago. The idea is simple. You take the data that is to be compressed as a byte array input, which you encode to digits 0-9. Then you begin to calculate some neverending, unrepeating sequence of numbers, like the value of pi. At each index of pi's decimals, you check if the decimals from that point onward are the same as the input. If they are, you just output the offset to that decimal. If they don't match, you continue to the next decimal.

    AND you can use this same algorithm for other purposes as well. Some warez dudez for example should be very interested to hear that you can find every program, porn movie etc. ever made in pi's decimals!! You only need one number: the offset to the beginning decimal! Miraculous, isn't it??

    I'm programming the compression algorithm, but it takes some time. I'm still running the same first compression test which I started a few years ago. There might be some bug somewhere and that's why it takes time, but another answer can be that because I have needed my computer for other things as well during these years, I've allocated just a little portion of processing time to that compression algorithm.

    Best regards, Hessu Hopo

  • Anon (unregistered) in reply to H.Hopo
    H.Hopo:
    This reminds me of the world's best, unbeatable compression algorithm I invented some years ago. The idea is simple. You take the data that is to be compressed as a byte array input, which you encode to digits 0-9. Then you begin to calculate some neverending, unrepeating sequence of numbers, like the value of pi. At each index of pi's decimals, you check if the decimals from that point onward are the same as the input. If they are, you just output the offset to that decimal. If they don't match, you continue to the next decimal.

    AND you can use this same algorithm for other purposes as well. Some warez dudez for example should be very interested to hear that you can find every program, porn movie etc. ever made in pi's decimals!! You only need one number: the offset to the beginning decimal! Miraculous, isn't it??

    I'm programming the compression algorithm, but it takes some time. I'm still running the same first compression test which I started a few years ago. There might be some bug somewhere and that's why it takes time, but another answer can be that because I have needed my computer for other things as well during these years, I've allocated just a little portion of processing time to that compression algorithm.

    Best regards, Hessu Hopo

    Are you serious? While it's entirely possible to define a function that maps 256 numbers into 10, you'd never be able to decompress that. What you're describing is no more than a hash (and not a particularly great one at that).

  • H.Hopo (unregistered) in reply to Anon
    Anon:
    Are you serious? While it's highly possible to define a function that maps 256 number into 10, you'd never be able to decompress that. What you're describing is no more than a hash (and not a particularly great one at that)

    Of course I'm serious. I just forgot to mention that the decompression part of the software of course also calculates pi's decimals until it reaches the offset. The compressed file format I use is a normal zip file which consists of two numbers, the offset and the length. (Originally it was not zipped, but I decided to zip it because the offset number can be quite long.)

    For example, if you would kindly calculate to offset 988294834729838874742083849872728384982989384202384989883492934 (I hope I remember that right) and take the next 400 000 000 bytes, you will find a marvelous video clip. I don't think it is a "real" clip in the sense that it is not human-made but instead pure coincidence. It is still watchable (seems to be AVI format), even though there is some bad quality in the video from time to time. It contains somewhat strange humor with Benny Hill-like appearances. (But don't play 05:37 out loud near children, as the clip takes a surprising "hardcore" move!)

  • Err (unregistered) in reply to DrYak
    DrYak:
    Cube:
    I would think that newer compilers would have a warning about shadowing in this situation, but maybe warnings were disabled.

    Test.c :

    #include <stdio.h>
    #include <stdlib.h>
    
    int main () {
        int bork;
    
        printf("bork : %d\t%08x\n", bork, bork);
        exit (0);
    }

    gcc -pedantic -Wall -x c -std=gnu99 -o test test.c

    gcc -pedantic -Wall -x c++ -std=gnu++98 -lstdc++ -o test test.c

    Then :

    ./test

    bork : -1208837232 b7f29b90

    With the latest gcc, neither the C nor the C++ version complained about non-initialised memory.

    Good practice recommends always initialising variables to something. In cases where that value is unused, the optimizer will drop the useless write (unless the variable is 'volatile' and its value could be accessed by a background procedure between two writes).

    But compilers should detect these situations (they are already able to keep track of this kind of stuff for their optimizers) and emit a warning, because this could lead to data corruption. (Imagine if this wasn't an int but a pointer, that the code checked for NULL before writing some data, and that the memory spot previously contained some valid memory address - like a pointer to some critical piece of data.)

    I think this is down to the use of printf. When using a function with varargs, the compiler can't do most of its checking. Try again using std::cout and see if that doesn't complain.

  • (cs) in reply to Stormy
    Stormy:
    The real WTF here is that the site is called "Worse Than Failure". When I looked at the code, I said out loud, "What the F*ck?", not "What the Failure".

    No. The real "What the F*ck" here is people who complain about the name of a website that doesn't belong to them. If you want to name a website, register a domain name - you can call the site you put on that domain whatever you want.

  • The real wtf fool (unregistered) in reply to KenW
    KenW:
    No. The real "What the F*ck" here is people who complain about the name of a website that doesn't belong to them. If you want to name a website, register a domain name - you can call the site you put on that domain whatever you want.

    Nah. The real WTF is that alex never thought to tell granny papadopolis that the name of the site is the daily worse than failure. Could've saved himself all that domain redirection hassle.

    captcha: wtfisthethoughtbehindputtingthecaptchainthemsg

  • (cs) in reply to H.Hopo
    Anon:
    Are you serious? ... What you're describing is no more than a hash

    I suppose if you're going to write something that's wildly incorrect, you might as well do so anonymously.

    H.Hopo:
    Of course I'm serious.

    For example, if you would kindly calculate to offset 988294834729838874742083849872728384982989384202384989883492934 (i hope i remeber that right) and take the next 400 000 000 bytes, you will find a marvelous video clip. I don't think that is "real" clip in the sense that it is not human made but instaed of pure coincidence. It is still watchable (seems to be avi format), even though there is some bad quality in the video from time to time. It contains somewhat strange humor with Benny Hill like appearances. (BUt don't play 05:37 out loud near children, as the clip takes a surprising "hardcore"-move!)

    Wrong offset. That data's MPEG Level-3 audio of George Bush singing "God Didn't Make Little Green Apples", accompanied by Dick Cheney on the harpsichord. You're thinking of offset 873987492897029830498206869598729374928.

    (Alas, I think CS humor is largely wasted here. Gotta like people who post completely incorrect responses anonymously, though.)

  • Corporate Cog (unregistered)

    Bravo! A real wtf! It's been so long... I was beginning to think there were none left (only sub-optimal working solutions).

  • (cs)

    With infinite computing power, that Pi compression algorithm would actually work!

    Is it bad that I don't see the WTF in the getName() char* example on page 1? Is it that getNameFromSomewhereElse might expect to be able to write more than 254 chars and a null?

  • Brickett Ranaculus (unregistered) in reply to Bob Janova
    Bob Janova:
    With infinite computing power, that Pi compression algorithm would actually work!

    Is it bad that I don't see the WTF in the getName() char* example on page 1? Is it that getNameFromSomewhereElse might expect to be able to write more than 254 chars and a null?

    Yes, it's very bad.

  • (cs) in reply to Bob Janova
    Bob Janova:
    With infinite computing power, that Pi compression algorithm would actually work!

    Is it bad that I don't see the WTF in the getName() char* example on page 1? Is it that getNameFromSomewhereElse might expect to be able to write more than 254 chars and a null?

    You mean the one where it returns a pointer to a local variable?

  • (cs) in reply to H.Hopo
    H.Hopo:
    Of course I'm serious.

    For example, if you would kindly calculate to offset 988294834729838874742083849872728384982989384202384989883492934

    I'm thinking that the offset you end up with will most likely be close to, or larger than, the original data you're trying to compress. I tried a similar thing once at a friend's suggestion: the idea was to regard the whole data block as a huge number, and then try to make an equation whose result was that number. We ended up trying factorials, so if you were to compress:

    634730233097352716362370928332162373827372637

    You get:

    cmd:
    D:\>inttest number.txt -f
    Loading number from number.txt...
    Finding factors:
    (1*38!) + (8*37!) + (4*36!) + (10*35!) + (21*34!) + (9*33!) + (30*32!) + (22*31!) + (2*30!) + (13*29!) + (19*28!) + (16*27!) + (6*26!) + (23*25!) + (21*24!) + (18*23!) + (6*22!) + (15*21!) + (19*20!) + (16*19!) + (12*18!) + (5*17!) + (11*16!) + (12*14!) + (9*13!) + (11*12!) + (7*11!) + (10*10!) + (7*9!) + (5*8!) + (1*7!) + (6*6!) + (5*5!) + (4*4!) + (3*3!) + (1*2!) + (1*1!)

    Contains 37 factors with an assessed size:

        List of roots/multipliers - 74 bytes
        List of sequential multipliers of roots - 37 bytes
        List of sequential multipliers of roots, bit packed - 22 bytes

    Root values range from 1 to 38. 1 multiplier in the output sequence would be zero.

    Assessed compression from 20 bytes is:
        Pairs mode - 370.0%
        Seq. multipliers mode - 185.0%
        Packed seq. multipliers mode - 110.0%

    The bit at the end shows how big the "compressed" data would be after the process. Now I might by sheer dumb luck come up with a number that could be compressed to (15266!) + (172282!), but it really didn't seem that likely. The closest I ever got was the bit-packed results being about 103% of the size of the original data.

    I'm certainly nowhere near knowledgeable in this field, but I came to the conclusion that a lossless compression algorithm that would get good results on all possible data sets must be impossible. Whatever program you make would have to be deterministic, meaning it would always get the same output from the same input. Therefore if your data set has 15*(10^800) permutations that can all be output by the decompressor, you need to have 15*(10^800) permutations in the data set that you input into the decompressor. Given that, it would stand to reason that the average size of your input data would be the same as the average size of the output...

    If it did work of course, then you should be able to recompress the compressed data and still make it smaller. Over and over again. This would be fantastic, since then I could compress my entire DVD collection into the number 42 and give it to all my friends written on the back of a postage stamp. What you gonna do about that MPAA? Let the anarchy begin!
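    Devi's counting argument can be made concrete with a quick sketch: for any length n there are strictly more n-bit inputs than outputs shorter than n bits, so no lossless scheme can shrink everything (pi offsets included).

    ```cpp
    #include <cstdio>
    #include <cstdint>

    // Pigeonhole check: 2^n distinct n-bit inputs versus
    // 2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1 distinct outputs shorter than n bits.
    int main() {
        const int n = 16;
        const std::uint64_t inputs = 1ULL << n;
        std::uint64_t shorter = 0;
        for (int len = 0; len < n; ++len)
            shorter += 1ULL << len;   // count all bit strings of length len
        std::printf("%llu inputs, %llu shorter outputs\n",
                    (unsigned long long)inputs, (unsigned long long)shorter);
        return 0;
    }
    ```

    One input is always left over, so any "compressor" that shortens some inputs must lengthen others - which is also why repeatedly recompressing down to "42" can't work.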

  • H.Hopo (unregistered) in reply to Devi
    Devi:
    If it did work of course, then you should be able to recompress the compressed data and still make it smaller. Over and over again. This would be fantastic, since then I could compress my entire DVD collection into the number 42 and give it to all my friends written on the back of a postage stamp. What you gonna do about that MPAA? Let the anarchy begin!

    You have very good points there, my friend. However, these problems ARE solvable! Why is JPG compression so efficient? Because it is not lossless! This is the key.

    I have been programming such a "loss inducer" for my compressor project for some time now. The thing works basically so that during compression the compressor, which goes through the pi's offsets, asks the loss inducer "is this good enough". The loss inducer checks and if it's good enough it is accepted. Let's be realistic -- we can't be perfectionists here, can we!

    This brilliant method greatly enhances the chances of finding an offset early in the decimals. The difficult part has been programming the logic of how the inducer can take ANY input and inflict the greatest possible amount of data loss, but still provide value to the customer. My architecture is such that I have different plugin classes for every file type that exists, like AVIPlugin, PDFPlugin, WindowsMetafilePlugin, PaintShopPro4_1_2Plugin etc. These plugins know just how f*ked up the actual file can be but still provide business value to the customer.

    Good luck, Hessu Hopo

  • Unknown (unregistered)
    Good practice should recommend to always initialise variables to something...

    I hate idiotic comments like this. There is a very good reason for having a distinction between initialization (notice it's spelled with a z) and declaration. If there are lots of people who cannot utilize this power, that's their problem.

    // assume some kind of embedded device, where we cannot afford
    // more than 256 bytes for this information
    char address[256];
    // input from user
    // store input into address (of course with boundary check)

    That is much more preferred than initializing every one of the 256 chars to zero.

    Or what about the paging inside the OS? What, we should initialize a page with zeros, before loading anything into that memory location? I don't think so.

    There are very few 'rules' that should be followed all the time, relative to the number of 'rules' that have exceptions. If you are teaching first-year students, then you can start off with generalizations like this, but it's not acceptable past that.

  • Unknown (unregistered)
    Why is JPG compression so efficient? Because it is not lossless! This is the key.

    No, it's because of the nature of the data. English text compresses quite well, and that's lossless. Try running any compression on data that is already in compressed form, and you will get squat.

  • ddf (unregistered)

    ThisTing.Something(42) ??

  • iw (unregistered) in reply to H.Hopo

    That's great until you realize that for some files the value of the offset is longer than the 0-9 string itself.

  • iw (unregistered) in reply to H.Hopo
    H.Hopo:
    This reminds me of the world's best, unbeatable compression algorithm I invented some years ago. The idea is simple. You take the data that is to be compressed as byte array input, which you encode to digits 0-9. Then you begin to calculate some neverending, unrepeating set of numbers, like the value of pi. At each index of pi's decimals, you check if the pi's decimals from that point onward are the same as the input. If they are, you just output the offset to that pi's decimal. If they don't match, continue to next pi's decimal.

    AND you can use this same algorithm to other purposes as well. Some warez dudez for example should be very interested to hear that you can find every program, porn movie etc. ever made in the pi's decimals!! You only need one number: the offset to the beginning decimal! Miraculous, isnt it??

    I'm programming the compression algorith, but it takes some time. I'm still running the same first compression test which I started a few years ago. There might be some bug somewhere and that's why it takes time, but another answer can be that because I have needed my computer to other things as well during these years, I've allocated just a little portion of processing time to that compression algorithm.

    Best regards, Hessu Hopo

    That's great until you realize that for some files the value of the offset is longer than the 0-9 string itself.

  • (cs) in reply to Unknown
    Unknown:
    Or what about the paging inside the OS? What, we should initialize a page with zeros, before loading anything into that memory location? I don't think so.

    Fortunately, people who've given the matter a bit more thought do think so, if the physical memory backing that page was previously used by a different process. In fact, it's an Orange Book requirement (at C2 and above, I think) called "Object Reuse".

    "Initialize everything"[*] is naive, but it's equally foolish to make sweeping generalizations about cases where initialization is inadvisable.

    [*] Or "initialise", for those who use the British spelling, which is a perfectly valid choice, contra our unknown friend.

  • N Morrison (unregistered)

    I too felt intimidated for the first few weeks. I spent a whole day my first week trying to load one line of data into the Oracle database against all of the triggers and constraints that were running.

    Then I posted up in my cubicle a few lines from The Mikado:

    "On a cloth untrue
    With a twisted cue
    And elliptical billiard balls."

    One young lady new to this country and company asked me what it meant. I told her she would find out. About 6 weeks after she started, she came up and grabbed me by the arm and said, "I understand. About the billiard balls. I understand now".

    Still makes me laugh.

  • OM (unregistered) in reply to DrYak

    Use an optimization flag, like -O2. gcc does not do data flow analysis without -O.
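    A quick way to check (assuming gcc is on the path): the same uninitialized read as in DrYak's test program, compiled with optimization so the data-flow analysis runs.

    ```shell
    # Reproduce the uninitialized read, then look for gcc's warning.
    cat > bork.c <<'EOF'
    #include <stdio.h>
    int main(void) {
        int bork;
        printf("bork : %d\n", bork);   /* bork is read uninitialized */
        return 0;
    }
    EOF
    gcc -O2 -Wall -c bork.c 2>&1 | grep -q "uninitialized" && echo "warning emitted"
    ```

    With -O2 -Wall, modern gcc reports that 'bork' is used uninitialized; exact wording and whether -O is still required varies by gcc version.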

  • grumpy (unregistered)

    It's not a miracle if it prints 42. Seems to me it would probably happen fairly often. If you first call the constructor, it will initialize a local variable on the stack to 42.

    Now you call the accessor, which also uses a local variable from the stack, but doesn't initialize it. The odds are pretty good that it'll then use the same memory address as the constructor did, and if you haven't called any functions in between, the value 42 will probably still be there. So with a bit of care, a lot of luck, and the right compiler implementation, it will print 42 fairly often (at least in small test cases)
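    That stack-slot-reuse effect can be sketched roughly like this. It is undefined behavior on purpose, so the output is NOT guaranteed; on many compilers at -O0 the two frames line up and it does print 42.

    ```cpp
    #include <cstdio>

    // ctor() leaves 42 in its stack frame; accessor() declares an
    // uninitialized local that, with identical frame layouts, often lands in
    // the very same slot. Reading it is undefined behavior, so any output
    // (42 or otherwise) is at the compiler's mercy.
    static void ctor()     { volatile int nSomeValue = 42; (void)nSomeValue; }
    static void accessor() { volatile int nSomeValue; std::printf("%d\n", nSomeValue); }

    int main() {
        ctor();
        accessor();   // frequently prints 42 at -O0, but nothing guarantees it
        return 0;
    }
    ```

    The volatile qualifier just keeps the optimizer from deleting the dead store and the indeterminate read; with optimization on, the value typically lives in a register and the effect disappears, as brendan noted above.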

  • (cs) in reply to Newbius Maximusq
    I would think that newer compilers would have a warning about shadowing in this situation, but maybe warnings were disabled.
    ...or just ignored. Some of my co-workers at a previous job didn't think there was any reason to worry if their application compiled with (literally) 1000 warnings - hey, it compiled, right? What could POSSIBLY go wrong?

    ...or, at a previous contract, "you know the build has succeeded if there are exactly 213 warnings". If you go fix some, it screws the build up for everyone else!

  • Tudor (unregistered)

    It's the WTF comment that makes it funny. Otherwise, it's just a bug, which (I'm ashamed to say) I made myself.

    I was writing some code to do exponential backoff. Deep in the bowels of some function, there was (this is pseudo-code-ish):

    while (...) {
      ConnectToServer();
      if (!ServerIsBusy()) break;
      int delay = ExponentialBackoff(...);
      Sleep(delay);
    }
    

    But then (I don't remember why) I wanted to be able to have a constant delay (rather than exponential), so the code became (cut and paste to the rescue):

    int delay = 0;
    if (use_exponential_backoff) {
      int delay = ExponentialBackoff(...);  // oops: declares a new, shadowing delay
    } else {
      delay = something_else;
    }
    

    Needless to say, the maintainers of the server I was connecting to came crashing down the first time I ran this in production with use_exponential_backoff = true.
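    A sketch of how -Wshadow would have flagged Tudor's bug (hypothetical file and function names): the inner "int delay" declares a new variable instead of assigning the outer one.

    ```shell
    # Minimal reproduction of the shadowing pattern, then check gcc's warning.
    cat > backoff.c <<'EOF'
    int backoff(int use_exponential_backoff) {
        int delay = 0;
        if (use_exponential_backoff) {
            int delay = 100;   /* oops: shadows the outer delay */
            (void)delay;
        }
        return delay;          /* still 0 */
    }
    EOF
    gcc -c -Wshadow backoff.c 2>&1 | grep -q "shadow" && echo "caught by -Wshadow"
    ```

    -Wshadow is not part of -Wall, which is one reason bugs like this survive code review.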

  • Marc (unregistered)

    I wonder how often they tried until 42 appeared :)

  • Unknown (unregistered) in reply to MichaelWojcik

    Good idea. While we are at it - why not initialize every memory location to zero during the boot process, before loading the OS?

    How about No.

  • Unknown (unregistered) in reply to MichaelWojcik
    MichaelWojcik:
    Unknown:
    Or what about the paging inside the OS? What, we should initialize a page with zeros, before loading anything into that memory location? I don't think so.

    Fortunately, people who've given the matter a bit more thought do think so, if the physical memory backing that page was previously used by a different process. In fact, it's an Orange Book requirement (at C2 and above, I think) called "Object Reuse".

    "Initialize everything"[*] is naive, but it's equally foolish to make sweeping generalizations about cases where initialization is inadvisable.

    [*] Or "initialise", for those who use the British spelling, which is a perfectly valid choice, contra our unknown friend.

    The last comment was a response to this.

  • Philippe Schober (unregistered) in reply to Marc
    Marc:
    I wonder how often they tried until 42 appeared :)
    Once, I would say.

    As long as you call the Accessor function right after the constructor, the chances are high that you get the 42 you wish.

    Why? Because the first value in the constructor will be stored at memory position X, which will be freed afterwards. When you now call the Accessor, it will reserve the same part of the memory for its variable (namely X). That's why "sometimes" the result is 42.

    But when you reserve any memory in the meantime, the result will vary.

    Quite easy. You get worse and harder-to-find errors when you write the string "x" into a character array of length 1. Then things begin to get funny....

  • Guy on the left (unregistered) in reply to brendan

    Ummm... that code will keep printing whatever is at that memory location UNTIL it somehow miraculously changes to 42. It never actually prints 42.

  • Guy on the left (unregistered) in reply to brendan
    brendan:
    Rank Amateur:
    Peter:
    GUnit:
    You silly goose, this is a better approach:

    class Something {

    public:

        Something() { int nSomeValue = 42; }

        void Accessor(void)
        {
            int nSomeValue;

            while (nSomeValue != 42)
            {
                printf("%d\n", nSomeValue);
                // Problem solved. May take a while though
            }
        }
    }

    Actually, as you're not changing the memory location you are looking at or the value in it, that's an infinite loop unless 42 is in there to start with. Unless printf modifies its parameters now :P!

    No, GUnit has it right. It's only a matter of time before cosmic rays flip the right bits so it prints 42. I calculate it will take 7.5 million years. --Rank

    That would only work if the compiler didn't optimize the code by putting the local nSomeValue into a register. Because if it does, you'll have to wait more than 7.5 million years.

    Ummm... that code will keep printing whatever is at that memory location UNTIL it somehow miraculously changes to 42. It never actually prints 42.

  • Guy on the left (unregistered) in reply to Guy on the left

    Oh man, I forgot I was reading archived WTFs. Why am I still commenting here?

  • Oscar Carserud (unregistered)

    You all miss the point! They clearly wanted to know if the IT guy could understand the problem, as a test of his knowledge.

Leave a comment on “In the Shadow of Giants”
