• DavidTC (unregistered) in reply to Zecc
    Zecc:
    var idx = 0001 * (bit1 ? 1 : 0)
            + 0010 * (bit2 ? 1 : 0)
            + 0100 * (bit3 ? 1 : 0)
            + 01000 * (bit4 ? 1 : 0);
    

    switch(idx){ case 0: return "0"; case 1: return "1"; case 2: return "2"; //... case 16: return "F"; }

    Yeah, seriously, that's the 'most stupid' way to do it. Anyone who comes up with a stupider method isn't actually a programmer.

    Even people who have no idea that languages usually come with base-changing tools should be able to work 'your method' out unless they literally have no idea what a 'base' is.

    Hell, there's even a more stupid method that would work better. If they can't do the math in one line, they could always have a counter like:

    counter += 4 * bit3;

    That's just four lines of code. Or even, for the seriously math challenged: if (bit3) counter += 4;

    The idea that you would write if statements for a 4! matrix of possibilities if you had any other way to do it is insane. Especially if this was all to set a single value.
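    The "counter" approach described above can be sketched in a few lines of C (bit names and their weights follow Zecc's convention of bit1 as the least significant bit; this is an illustrative sketch, not anyone's actual code):

```c
#include <assert.h>

/* Accumulate each bit's positional weight, then index a digit table.
   bit1 is the least significant bit, matching the quoted code. */
char nibble_to_hex(int bit1, int bit2, int bit3, int bit4)
{
    int counter = 0;
    if (bit1) counter += 1;
    if (bit2) counter += 2;
    if (bit3) counter += 4;
    if (bit4) counter += 8;
    return "0123456789ABCDEF"[counter];   /* counter is always 0..15 */
}
```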

  • Mog (unregistered) in reply to brazzy
    brazzy:
    Manos:
    One day, after a code review by one of our 'senior' developers this little beauty showed up...

    #define NUMBER_OF_BITS_IN_BYTE 8

    The justification was... that the byte might change....

    You might want to read up on what a "byte" is and is not: http://en.wikipedia.org/wiki/Byte The only WTF in that is that he didn't use CHAR_BIT in limits.h, which has exactly that content for exactly that reason.

    But... wait... A char doesn't have to be 1 byte right? So wouldn't that be like.... #define NUMBER_OF_BITS_IN_BYTE CHAR_BIT / sizeof(char)

    Long live the {,u}int{8,16,32}_t and similar types
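    For reference, the CHAR_BIT point in C looks like this (a minimal sketch; the function name is illustrative):

```c
#include <limits.h>
#include <stdio.h>

/* CHAR_BIT from <limits.h> is the number of bits in a char, and in C a
   byte is by definition the storage of one char, so this is exactly the
   "bits per byte" constant the quoted #define reinvents. The standard
   guarantees CHAR_BIT >= 8; on mainstream platforms it is 8. */
void show_bits_per_byte(void)
{
    printf("CHAR_BIT = %d\n", CHAR_BIT);
    printf("sizeof(char) = %zu (always 1 by definition)\n", sizeof(char));
}
```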

  • Andrew (unregistered)

    How do we know that the javavm isn't just doing this in the backend when we call the one line of code? :)

  • Security Guaranteed (unregistered) in reply to brazzy
    brazzy:
    Actually, in C a byte does not necessarily have 8 bits. There is nothing in the least unusual about needing more than 2 hex digits to represent a byte.

    (cough) you what?

  • (cs) in reply to Mog
    Mog:
    But... wait... A char doesn't have to be 1 byte right?

    It does, sorry.

  • BIG ASS MONKEY (unregistered) in reply to Spectre
    Spectre:
    Mog:
    But... wait... A char doesn't have to be 1 byte right?

    It does, sorry.

    The japanese might disagree with you there.

  • MinorThread (unregistered)

    this code can't be real.. and if it is.. we should hunt him down and remove him from the gene pool...

  • Ville (unregistered) in reply to Mog
    Mog:
    #define NUMBER_OF_BITS_IN_BYTE CHAR_BIT / sizeof(char)
    How would that be appropriate? You could just as well write...

    #define NUMBER_OF_BITS_IN_BYTE CHAR_BIT //sizeof(char) is 1 by definition.

    ..but then you would be assuming that byte equals char. What kind of fouled up logic is that?

  • (cs)

    I once wrote a function in C to convert from any base (up to 36) to any base (up to 36) which had .. what was it, 25 lines or so?
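    A converter of the kind described can indeed fit in roughly that much C. The sketch below is an illustration, not the original code; the function names, the choice of upper-case digits, and the assumption that the input contains only valid digits are all mine:

```c
#include <ctype.h>
#include <string.h>

static const char DIGITS[] = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";

/* Parse a string of digits in the given base (2..36). Assumes every
   character is a valid digit for that base. */
unsigned long from_base(const char *s, int base)
{
    unsigned long value = 0;
    for (; *s; ++s) {
        const char *p = strchr(DIGITS, toupper((unsigned char)*s));
        value = value * (unsigned long)base + (unsigned long)(p - DIGITS);
    }
    return value;
}

/* Render a value in the given base (2..36) into out. */
void to_base(unsigned long value, int base, char *out)
{
    char buf[72];
    int i = 0;
    do {
        buf[i++] = DIGITS[value % (unsigned long)base];
        value /= (unsigned long)base;
    } while (value);
    while (i) *out++ = buf[--i];  /* digits came out least-significant first */
    *out = '\0';
}
```

    Chaining the two handles any base-to-base conversion, including the "absurd" base 17 to base 3 case mentioned further down the thread.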

  • (cs) in reply to BIG ASS MONKEY
    BIG ASS MONKEY:
    Spectre:
    Mog:
    But... wait... A char doesn't have to be 1 byte right?

    It does, sorry.

    The japanese might disagree with you there.

    Sucks to be Japanese and having to work with C, where it's defined that way.

  • (cs) in reply to Nobody
    Nobody:
    Manos:
    One day, after a code review by one of our 'senior' developers this little beauty showed up...

    #define NUMBER_OF_BITS_IN_BYTE 8

    The justification was... that the byte might change.... and no 'magic' numbers should appear in the code...

    Actually I can understand this. Not because the byte might change but rather to document the code. Why make the next guy who has to maintain your code have to figure out what the number 8 represents when you can make it glaringly obvious this way?

    TRWTF in these situations is the ridiculously long names. Maybe you can justify not using a "magic number", but you could at leaast name it BITS_PER_BYTE.
  • W. Snapper (unregistered) in reply to ambrosen
    ambrosen:
    Stavros:
    No fewer than!
    Wanna bet‽

    Linguists say no! Or actually, they say "prescriptivist poppycock", but it's much the same thing.

    So now the question is does "up to 10 items" mean 9 items or fewer?
  • Mr.'; Drop Database -- (unregistered) in reply to BIG ASS MONKEY
    BIG ASS MONKEY:
    Spectre:
    Mog:
    But... wait... A char doesn't have to be 1 byte right?
    It does, sorry.
    The japanese might disagree with you there.
    The C standard disagrees with the Japanese there.
  • (cs)

    If you are laughing at this code, have you considered that maybe the developer was trying to write something cross platform that is independent of the proprietary java packages that provide hex functions? I can see how if you have never developed cross platform code it might seem strange.

  • Beryllium (unregistered)

    Dang, I wrote a working demo in powerpoint that was cleaner than this.

  • anonymous coward wk (unregistered) in reply to TopCod3r
    TopCod3r:
    If you are laughing at this code, have you considered that maybe the developer was trying to write something cross platform that is independent of the proprietary java packages that provide hex functions? I can see how if you have never developed cross platform code it might seem strange.

    Java is a PLATFORM.

  • Matt (unregistered) in reply to Mr.'; Drop Database --
    Mr.'; Drop Database --:
    BIG ASS MONKEY:
    Spectre:
    Mog:
    But... wait... A char doesn't have to be 1 byte right?
    It does, sorry.
    The japanese might disagree with you there.
    The C standard disagrees with the Japanese there.

    Sucks to be Japanese. They have to use wide-chars which are two bytes long. All of their strings take up twice as much RAM!

  • Ville (unregistered) in reply to Mr.'; Drop Database --
    Mog:
    But... wait... A char doesn't have to be 1 byte right?
    C standard states that byte must have as many bits as char, so char is always exactly 1 byte. But this doesn't mean that char does have to be exactly 8 bits. A char or byte (per C standard's terms) might just as well be 42 bits as long as it's no less than 8 bits. So even Japanese could fit all their alphabets in there, no problem.
  • (cs) in reply to Manos
    Manos:
    One day, after a code review by one of our 'senior' developers this little beauty showed up...

    #define NUMBER_OF_BITS_IN_BYTE 8

    The justification was... that the byte might change.... and no 'magic' numbers should appear in the code...

    Since the number of bits per byte is always 8, the following should be enough:

    #define EIGHT 8

    An advantage of this approach is that it is reusable. You can eliminate magic numbers without having to pollute the code with numerous #defines for NUMBER_OF_BITS_PER_CHAR, NUMBER_OF_BYTES_IN_SIXTY_FOUR_BITS, NUMBER_OF_SIDES_IN_AN_OCTAGON, etc.

  • (cs) in reply to NullAndVoid
    NullAndVoid:
    Manos:
    One day, after a code review by one of our 'senior' developers this little beauty showed up...

    #define NUMBER_OF_BITS_IN_BYTE 8

    The justification was... that the byte might change.... and no 'magic' numbers should appear in the code...

    Since the number of bits per byte is always 8, the following should be enough:

    #define EIGHT 8

    An advantage of this approach is that it is reusable. You can eliminate magic numbers without having to pollute the code with numerous #defines for NUMBER_OF_BITS_PER_CHAR, NUMBER_OF_BYTES_IN_SIXTY_FOUR_BITS, NUMBER_OF_SIDES_IN_AN_OCTAGON, etc.

    Actually, most but not ALL systems have 8 bits in a byte. But your approach is sound, because, for example, if a byte is 7 bits, it lets you do this, and make the change in only one place...

    #define EIGHT 7

  • Matt (unregistered) in reply to TopCod3r
    TopCod3r:
    NullAndVoid:
    Manos:
    One day, after a code review by one of our 'senior' developers this little beauty showed up...

    #define NUMBER_OF_BITS_IN_BYTE 8

    The justification was... that the byte might change.... and no 'magic' numbers should appear in the code...

    Since the number of bits per byte is always 8, the following should be enough:

    #define EIGHT 8

    An advantage of this approach is that it is reusable. You can eliminate magic numbers without having to pollute the code with numerous #defines for NUMBER_OF_BITS_PER_CHAR, NUMBER_OF_BYTES_IN_SIXTY_FOUR_BITS, NUMBER_OF_SIDES_IN_AN_OCTAGON, etc.

    Actually, most but not ALL systems have 8 bits in a byte. But your approach is sound, because, for example, if a byte is 7 bits, it lets you do this, and make the change in only one place...

    #define EIGHT 7

    This has the additional benefit that if for any reason the digit 8 turns out to have the value 7 (presumably due to a decay in the value 8 leading it to be rounded down during an int cast - these things happen, math isn't as robust as you'd think), you can fix all of the problems in your program in one simple quick fix.

  • ninjabob7 (unregistered)

    My favorite part is that he has 3-4 lines of comments on EVERY LINE of code. "Number of characters per byte" is fairly hilarious. And do those bit arrays actually get used?

  • Todd (unregistered)

    In ninth grade, using an Apple II, and the first language I ever learned (BASIC), I wrote equally bad code to convert Fahrenheit to Centigrade.

    I can forgive myself for being a 14 year old neophyte. I can't forgive this code's author!

  • Ville (unregistered) in reply to NullAndVoid
    NullAndVoid:
    Please upgrade your method to allow balanced ternary input and provide the option for base 42 output.
    Not really hexadecimal though, if base is not 16. Then again, for properly implemented code that would be easy enough change. Only problem would be figuring out what characters to use after Z.
  • (cs) in reply to Matt
    Matt:
    TopCod3r:
    NullAndVoid:
    Manos:
    One day, after a code review by one of our 'senior' developers this little beauty showed up...

    #define NUMBER_OF_BITS_IN_BYTE 8

    The justification was... that the byte might change.... and no 'magic' numbers should appear in the code...

    Since the number of bits per byte is always 8, the following should be enough:

    #define EIGHT 8

    An advantage of this approach is that it is reusable. You can eliminate magic numbers without having to pollute the code with numerous #defines for NUMBER_OF_BITS_PER_CHAR, NUMBER_OF_BYTES_IN_SIXTY_FOUR_BITS, NUMBER_OF_SIDES_IN_AN_OCTAGON, etc.

    Actually, most but not ALL systems have 8 bits in a byte. But your approach is sound, because, for example, if a byte is 7 bits, it lets you do this, and make the change in only one place...

    #define EIGHT 7

    This has the additional benefit that if for any reason the digit 8 turns out to have the value 7 (presumably due to a decay in the value 8 leading it to be rounded down during an int cast - these things happen, math isn't as robust as you'd think), you can fix all of the problems in your program in one simple quick fix.

    I discovered tonight that this is also very helpful for internationalization. We just landed a contract with a large corporation in Oaxaca, Mexico, and I was tasked with translating our entire framework to Spanish. Sure enough, it turns out Mexican computers use only 7.5 bits per byte (must be cheaper to manufacture) which, over the years, have decayed to 7 bits. I was able to change the above line to

    #define OCHO 7

    and it fixed every problem in our program. My manager let me go home early for the holiday for finishing the conversion so quickly.

  • Drahcir (unregistered) in reply to Matt

    A sentence in English, translated to a language that uses characters (Japanese, Chinese, etc.), could take less space than its English counterpart, because theoretically 1 character could = 1 word. If a particular word was really long, the character equivalent could possibly be only one character, therefore taking less space. :D

  • h (unregistered) in reply to Drahcir

    And thus, a new form of compression was born.

  • Matt (unregistered) in reply to NullAndVoid
    NullAndVoid:
    Matt:
    TopCod3r:
    NullAndVoid:
    Manos:
    One day, after a code review by one of our 'senior' developers this little beauty showed up...

    #define NUMBER_OF_BITS_IN_BYTE 8

    The justification was... that the byte might change.... and no 'magic' numbers should appear in the code...

    Since the number of bits per byte is always 8, the following should be enough:

    #define EIGHT 8

    An advantage of this approach is that it is reusable. You can eliminate magic numbers without having to pollute the code with numerous #defines for NUMBER_OF_BITS_PER_CHAR, NUMBER_OF_BYTES_IN_SIXTY_FOUR_BITS, NUMBER_OF_SIDES_IN_AN_OCTAGON, etc.

    Actually, most but not ALL systems have 8 bits in a byte. But your approach is sound, because, for example, if a byte is 7 bits, it lets you do this, and make the change in only one place...

    #define EIGHT 7

    This has the additional benefit that if for any reason the digit 8 turns out to have the value 7 (presumably due to a decay in the value 8 leading it to be rounded down during an int cast - these things happen, math isn't as robust as you'd think), you can fix all of the problems in your program in one simple quick fix.

    I discovered tonight that this is also very helpful for internationalization. We just landed a contract with a large corporation in Oaxaca, Mexico, and I was tasked with translating our entire framework to Spanish. Sure enough, it turns out Mexican computers use only 7.5 bits per byte (must be cheaper to manufacture) which, over the years, have decayed to 7 bits. I was able to change the above line to

    #define OCHO 7

    and it fixed every problem in our program. My manager let me go home early for the holiday for finishing the conversion so quickly.

    The other half a bit crossed the border into the US.

  • anonymous coward (unregistered)

    it works and it is commented. complaints, complaints, complaints.

  • lolek (unregistered) in reply to emddudley

    Of course you know all modern compilers will do such trivial optimization? The only reason to use shifts there is that it is actually easier to read shifts than multiplication when working with bit masks and the like.

  • (cs) in reply to brazzy
    brazzy:
    Manos:
    One day, after a code review by one of our 'senior' developers this little beauty showed up...

    #define NUMBER_OF_BITS_IN_BYTE 8

    The justification was... that the byte might change....

    You might want to read up on what a "byte" is and is not: http://en.wikipedia.org/wiki/Byte The only WTF in that is that he didn't use CHAR_BIT in limits.h, which has exactly that content for exactly that reason.

    Yeah, cause we might port our embedded software to a processor and OS from the 70's.

  • (cs) in reply to poochner
    poochner:
    Leon:
    ambrosen:
    Stavros:
    No fewer than!
    Wanna bet‽

    Linguists say no! Or actually, they say "prescriptivist poppycock", but it's much the same thing.

    Yakka foob mog, grub pubbawup zink watoom gazork. Chumble spuzz.

    And if you don't like those words, you're being prescriptivist. I think they're perfectly valid.

    Indeed, quite cromulent.

    Kinn damrite; well slooshny. Too many self-hlassed odborneys have a right nudny nadarv on the menshy in these pekny dennies. Always trying to zartlatch the poor old ootchy-tell into the deep dark podzemo, and then giving it slzy slzy waa waa waa every time something as yednodooky as a potchy-tanny podstatny comes along. In my humble oppy noppy, they should prestat with the plakat, and if they don't have it rad, that's their dopper jelly, and they can zarstrtch.

    (With appy polly loggies to A. Burgess)

  • JM (unregistered)

    For the record, here is the C# oneliner:

    public static string ToHexString(byte[] bytes) {
      return BitConverter.ToString(bytes).Replace("-", "");
    }

    This is, by the way, orders of magnitude faster than using a StringBuilder and String.Format. The only thing that's still faster is the DIY solution (allocate a char[] twice as big as the byte array, use "0123456789abcdef"[bytes[n]]).
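    Written out, the "DIY solution" is a lookup table indexed per nibble, not per byte as the shorthand above might suggest. Here is a C sketch of the idea (illustrative; the function name is an assumption):

```c
#include <stddef.h>

/* Two table lookups per byte, one for each nibble. The caller provides
   an output buffer of at least 2*n + 1 chars. */
void bytes_to_hex(const unsigned char *bytes, size_t n, char *out)
{
    static const char HEX[] = "0123456789abcdef";
    for (size_t i = 0; i < n; ++i) {
        out[2 * i]     = HEX[bytes[i] >> 4];   /* high nibble */
        out[2 * i + 1] = HEX[bytes[i] & 0x0F]; /* low nibble  */
    }
    out[2 * n] = '\0';
}
```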

  • Matt (unregistered) in reply to BIG ASS MONKEY

    The Japanese might argue that a character doesn't have to be one byte, but they'd be wrong to say the same about a char.

  • (cs) in reply to JimBob
    JimBob:
    Bisqwit:
    Well, I dunno much about java, but C... *("0123456789ABCDEF" + ((bit1 << 3) | (bit2 << 2) | (bit3 << 1) | bit4)) should work...
    I like "0123456789ABCDEF"[bit1*8+bit2*4+bit3*2+bit4] slightly better.
    Like it all you want, but all it'll get you is a core dump ;)
    Why? It works fine for me, as in "I actually compiled the code and ran the program", which is apparently more than you did. It's valid C.
  • (cs) in reply to Zecc
    Zecc:
    switch(idx){
    case 0:  return "0";
    case 1:  return "1";
    case 2:  return "2";
    //...
    case 16: return "F";
    }
    So . . . your nibbles run from zero to sixteen? That's impressive! Mine only go up to fifteen. Where do you buy yours?
  • John (unregistered) in reply to mentaldingo
    mentaldingo:
    Roman:
    Perl code examples plz? ;-)
    sprintf("%x",$_);

    Yes, but apart from the '$_', it seems like Perl masquerading as C.

    TIMTOWTDI (See perlfaq4, fifth question)

  • (cs) in reply to Zecc
    Zecc:
    var idx = 0001 * (bit1 ? 1 : 0)
            + 0010 * (bit2 ? 1 : 0)
            + 0100 * (bit3 ? 1 : 0)
            + 01000 * (bit4 ? 1 : 0);
    Oh no! You're assuming that a bit can only be 0 or 1 there. What about being ready to adapt to where that changes?
  • (cs) in reply to Ville
    Ville:
    C standard states that byte must have as many bits as char, so char is always exactly 1 byte. But this doesn't mean that char does have to be exactly 8 bits. A char or byte (per C standard's terms) might just as well be 42 bits as long as it's no less than 8 bits. So even Japanese could fit all their alphabets in there, no problem.
    You fail, but that's because you're assuming that a character fits in a "char". (If you're doing the job properly, you need 31 bits per character. Really. Or to get rid of that assumption which is only an artifact of a misnamed type...)
  • (cs)

    I created a function once to do this for me, but it was much better than this piece of crap. Back then I was working with VB, which does not have functions to work with Hex (that I know of, at least).

    Even if it had, my function was still useful because it worked for any base. It went something like

    Public Function BaseConvert(originalNumber LongInt; originalBase, resultBase Int) as String
    ' don't bother nitpicking, I don't use VB for a while and I know this is
    ' not VB-syntactically correct. But the logic is still valid

    It was about 40 lines long and worked fine, even if you wanted something as absurd as converting from base 17 to base 3.

  • (cs)

    The Code looks very Enterprisey to me!

    Well done work.

  • (cs)

    The C standard requires that char must be at least 8 bits. On a PDP-10 (with 36-bit registers), char is 9 bits.

    Other requirements: int must be no shorter than char, long int must be no shorter than int, and short int must be no longer than int.

    In other words, if you know exactly how many bits you need, include stdint.h and use the u?int(8|16|32|64)_t types.
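    A short illustration of that recommendation (a sketch; the variable names are mine):

```c
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

/* Fixed-width types from <stdint.h> have exactly the stated number of
   bits on any platform that provides them, regardless of CHAR_BIT. */
void show_fixed_width(void)
{
    uint8_t  small = UINT8_MAX;   /* always 255 */
    uint32_t word  = UINT32_MAX;  /* always 4294967295 */
    printf("%" PRIu8 " %" PRIu32 "\n", small, word);
}
```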

  • Dan (unregistered)

    Even better would have been a hash table lookup. Then you could extend it any way you want without having to do any math in the code, and you wouldn't be tied down by how someone else implemented bytes, 1 bit to 842^16 bits.

  • iToad (unregistered) in reply to frustrati
    frustrati:
    brazzy:
    Actually, in C a byte does not necessarily have 8 bits. There is nothing in the least unusual about needing more than 2 hex digits to represent a byte.
    Actually, in C there is no "byte" data type

    Depends on the dialect of C. Some C compilers used for 8-bit microcontroller development may have both a "byte" data type and a "bit" data type. 8-bit processors may support either or both of these data types without having to pack and unpack larger data structures.

  • (cs) in reply to Dalden
    Dalden:
    Sunday Ironfoot:

    Well that would be my implementation. Probably a better /shorter way somewhere.

    What about Integer.toHexString() ?

    I would have used Integer.toHexStringOnMondaysWednesdaysAndAlternateFridays(). That is the one Java method people forget about because it is verbose.

  • (cs)

    I smell a new OMGWTF coding contest coming on...

  • Roland (unregistered) in reply to iToad
    iToad:
    frustrati:
    Actually, in C there is no "byte" data type

    Depends on the dialect of C.

    frustrati, by saying "C", referred to the ISO standard 9899, which only defines char to be an arithmetic data type having at least 8 significant bits.

    You, iToad, on the other hand, by saying "C", didn't refer to the ISO standard, but to some implementation of it. Since the "number of bits in a char" is implementation-defined by the C standard, your implementation defines it, probably to 8, maybe to something else. Otherwise, you don't have a C compiler, but only a compiler for "a language based on and looking almost like C".

  • Autarchex (unregistered) in reply to Ayin
    Ayin:
    Not necessarily, depends on the platform. Some platforms only have 1-bit shift operations, and on those platforms, bit1<<3 would need a loop to be implemented, which may be slower than a multiplication.

    Many platforms that lack arbitrary barrel-shift instructions also lack multiply instructions. A "multiplication" will then be implemented as a loop of single shifts and additions. If the compiler is clever then the two implementations will be exactly the same. If not, the "multiply" routine might end up being a larger loop due to inclusion of addition instructions in the loop.

    Ayin:
    Anyway, good optimizers always decide, on a case-per-case basis, which one is faster ... Never try to do the optimizer's work manually!

    Agreed. Unfortunately 99% of the code I write is for microcontrollers, and a lot of the compilers leave something to be desired when it comes to smart optimizers that can shrink your code without breaking the intended functionality.

  • mathew (unregistered) in reply to brazzy
    brazzy:
    Yes. It would thus require a change of the Java standard - not quite as epochal as a "Universe reboot", wouldn't you agree?

    On the other hand, the

    public static final char ZERO = '0';

    suggests that he's ready for the day when they overhaul mathematics and redefine the symbol that represents zero.

  • The Fake WTF (unregistered) in reply to rbonvall
    rbonvall:
    Marcelk:
    I like "0123456789ABCDEF"[bit1*8+bit2*4+bit3*2+bit4] slightly better.
    In C, such should always be written as (bit1*8+bit2*4+bit3*2+bit4)["0123456789ABCDEF"]

    This one is cleaner: *(bit1*8 + bit2*4 + bit3*2 + bit4 + "0123456789ABCDEF")

    Bitwise shifts are faster than multiplication. Unless you assume that your compiler will only be used on settings which optimize constant multiples like that.

Leave a comment on “Classic WTF: To the Hexth Degree”
