• aaaaaaaand_he_prays (unregistered)

    if (comment number == 0) { if comment number <= 65535 { RETURN FRIST! } }

  • TheCPUWizard (unregistered)

    Well... I would have used hex constants... Of course there are many implementations, but it would be interesting to do both speed and memory-footprint comparisons, as well as to know the environment / use-case.

  • Quite (unregistered)

    Why not output it as a binary string and count the position of the rightmost 1?

  • Quite (unregistered)

    I mean leftmost, of course. Duh.

  • Ross (unregistered)

    Why isn't it sorted properly?

  • O(log n) (unregistered)

    It wasn't written for 16 bits then extended; it was written for 32 bits and performs a binary chop.

  • George Gonzalez (unregistered)

    Could be worse. I've seen some public-domain weather-predicting code that in about 13 places computes Log2 by using natural logs and dividing by a constant. It gets the answer right, most of the time. Well, not for certain rare values like "8", whose Log2 comes out as 2.9999999...
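
    A minimal sketch of that failure mode (function names are mine, not from the weather code): log2 via natural logs, truncated to int. Whether exact powers of two survive depends entirely on the libm; on some platforms log(8.0)/log(2.0) lands a hair under 3.0 and the cast chops it to 2.

    #include <cmath>
    #include <cstdio>

    // integer log2 the dubious way: divide natural logs, truncate
    static int log2_via_ln(double x) {
        return (int)(std::log(x) / std::log(2.0));  // truncation, not rounding
    }

    int main() {
        for (int n = 1; n <= 30; n++) {
            double x = std::ldexp(1.0, n);          // exactly 2^n
            if (log2_via_ln(x) != n)
                std::printf("2^%d -> %d (off by one)\n", n, log2_via_ln(x));
        }
        return 0;
    }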

  • (nodebb)

    It's also bugged, in that according to it, log2(0) is 0. If you pass it zero, it should explode in some way. I recommend creating an exception class called "up" with a default constructor, assuming C++, so you can write

        throw up;
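
    For what it's worth, a minimal sketch of that suggestion as compilable C++ (note it has to be thrown as throw up{}; since plain throw up; won't parse once up names a type):

    #include <cstdio>

    class up {};                      // exception type with a default constructor

    static int log2_checked(unsigned x) {
        if (x == 0) throw up{};       // log2(0): explode, as recommended
        int l = 0;
        while (x >>= 1) l++;          // position of the highest set bit
        return l;
    }

    int main() {
        try { log2_checked(0); }
        catch (up&) { std::puts("caught: log2(0) rejected"); }
        return 0;
    }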
    
    
    Addendum 2016-07-26 07:54:
    Does MarkDown work or not work in this [REDACTED] system?  Apparently, the answer is "yes, it does indeed work or not work, at its whim."
    
  • Rudolf Polzer (google)

    I did exactly this before:

  • dwaz (unregistered)

    "At least here, we have the assurance that the universe's biggest numbers all have a log base 2 of 31."

    This is an odd complaint, seeing as the argument is a (32-bit) signed integer, and thus its log base 2 can never be higher than 31 (rounded down).

    The handling of 0 or negative is all wrong of course, but for all positive numbers it seems to work as intended.

  • (nodebb) in reply to George Gonzalez

    Well, not for certain rare values like "8", whose Log2 is 2.9999999.

    Hey, it's within a reasonable epsilon!

  • Ross (unregistered) in reply to Quite

    "Why not output it as a binary string and count the position of the rightmost 1?"

    Because that would take MORE clock cycles than this nested if construction. Really, if this was done just a little better, it would be a pretty good implementation.

  • (nodebb) in reply to Ross

    How about a while loop: while the integer is greater than 0, bit shift and increment a return value.

    That'd be a heck of a lot more efficient way to find that last 1 than this nested if hell.

    static int Log2(int x)
    {
        if (x <= 0) throw new InvalidOperationException("Log2 only supports positive values.");

        int i = -1;
        while (x > 0)
        {
            x = x >> 1;   // drop one bit per pass
            i++;          // count the shifts until x runs out
        }
        return i;
    }

  • KCE (unregistered) in reply to George Gonzalez

    Mathematically this is valid: log_2(x) = ln(x)/ln(2). So the constant was probably ln(2). If they didn't have enough decimal places, or had a typo in their hard-coded ln(2), the results would be off.
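
    A quick sketch of that point: the same division with a hand-typed ln(2) that is a digit short. The constant 0.6932 below is an invented stand-in for such a typo.

    #include <cmath>
    #include <cstdio>

    int main() {
        double x = 33554432.0;                            // 2^25 exactly
        std::printf("%d\n", (int)(std::log(x) / 0.6932)); // prints 24, not 25
        return 0;
    }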

  • Maciej (unregistered) in reply to lordofduct

    Would it be faster though? In the worst case the loop has to perform 31 steps, while the "nested if hell" will find a solution in 6 steps max.

  • Joseph Osako (google)

    A return type of int for a logarithm? Seriously? What are they using that for? Yes, I can see the possible use of getting the order of the highest significant bit in a value, but calling it the integral log2(), while technically correct, is misleading as hell. Calling the function something more meaningful such as high_byte() would be clearer, and might even help you see why the algorithm being used is (to channel Jim Fucking Sterling, Son, for a moment) a pile of red raw anuses.

    Addendum 2016-07-26 10:14: I means high_bit(), sorry.

    Addendum 2016-07-26 10:15: And now I are spellar bad. Fuck.

  • Foo AKA Fooo (unregistered) in reply to lordofduct

    That's not more efficient! Shorter to write, yes, but less efficient. As Maciej says, the original code may be 5 times faster. If used in a tight loop, a 5× speedup might well matter. We don't know the context (and given the misleading statements about "two halves" (cf. the "O(log n)" comment) and about "the universe's biggest numbers", I doubt the OP would even recognize the context if it stared him in the face). So, sorry, no WTF as posted!

    Sure, it might be written somewhat more readably using hex constants and, dare I say, even ?:, but the only real WTF in this posting is the inconsistent markup of the return statements.

  • Rich Hendricks (unregistered)

    8 space tabs? Burn the heretic!

  • (nodebb)

    You could probably do a binary search loop if you really want, something like this:

    int log2(int n) {
        int l = 0;
        for (int i = 16; i; i >>= 1) {  // try chunk sizes 16, 8, 4, 2, 1
            if (n >>> i) {              // anything set above the low i bits?
                n >>>= i;               // keep only the high part
                l += i;                 // and remember how far we shifted
            }
        }
        return l;
    }
    
  • Pietro Gagliardi (andlabs) (unregistered)

    I'm pretty sure this very function is discussed in Code Complete, with a much cleaner if-based solution that doesn't burn as many cycles as the while solution. Having log2(0)=0 makes sense for some common uses of this function (for instance, rounding values up to the nearest power of 2).

    @KCE: The problem is floating-point computation isn't necessarily ideal; some combinations of operations will generate not-quite-exact values that have to be rounded to reach the correct value.
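
    A minimal sketch of that rounding-up use, assuming an integer log2 in the spirit of the article's (helper names are mine):

    #include <cstdint>
    #include <cstdio>

    // integer log2 that, like the article's version, returns 0 for 0
    static int ilog2(uint32_t n) {
        int l = 0;
        while (n >>= 1) l++;
        return l;
    }

    // round n up to the nearest power of two
    static uint32_t round_up_pow2(uint32_t n) {
        if (n <= 1) return 1;                 // covers the log2(0) == 0 edge
        return 1u << (ilog2(n - 1) + 1);
    }

    int main() {
        std::printf("%u %u %u\n", round_up_pow2(5), round_up_pow2(64), round_up_pow2(65));
        return 0;
    }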

  • isthisunique (unregistered)

    What rolls down stairs alone or in pairs, rolls over your neighbour's dog? What's great for a snack and fits on your back? It's Log, Log, Log!

    It's Log, Log, it's big, it's heavy, it's wood. It's Log, Log, it's better than bad, it's good! Everyone wants a log! You're gonna love it, Log! Come on and get your log! Everyone needs a Log!

  • (nodebb)

    public static int z = 0;

    public static double slog(double y) {
        if (y <= 1) { return z; }              // note: z is never reset between calls
        else { z = z + 1; return slog(y / 2); }
    }

    Does the same thing, short/sweet, blahblahblah. Still no idea why you'd want this function. I ran this using 8192 and got the expected 13. I then ran this using 8194 and got 14, as the WTF in this article would do, but I still don't get it. log_2(8194) will be 13.000352177480302, and so to get a return of 14 is just wrong.

    Anyone think of a use for this, or is it maybe an attempt at log(x)/log(2)??

    Addendum 2016-07-26 11:32:

    WTF with the format??

    Addendum 2016-07-26 11:33: Aahhh.... I didn't use CODE tags. WTF??

  • Ext3h (unregistered) in reply to lordofduct

    How about a while loop: while the integer is greater than 0, bit shift and increment a return value.

    One does not simply bit shift. Seriously, don't. Not unless you really need to. And you definitely don't if you are just doing integer division by 2.

    Replacing division by bit shift manually is smart, but it isn't wise. It breaks the very second the endianness changes. Or someone decides to change the data type. It may have worked for an integer, but it breaks horribly when applied to a double.

    Leave the smart optimizations to the compiler, and stick to writing semantically sensible code.
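
    Tangent, but there is a real way the substitution bites (independent of the endianness question): for negative signed values, division and arithmetic shift aren't even the same operation, as this sketch shows.

    #include <cstdio>

    int main() {
        int x = -3;
        std::printf("-3 / 2  = %d\n", x / 2);    // -1: division truncates toward zero
        std::printf("-3 >> 1 = %d\n", x >> 1);   // -2 on typical two's-complement targets
        return 0;
    }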

  • Carl Witthoft (google) in reply to George Gonzalez

    Methinks YOU are the WTF in that particular case. If x is a float, even if x is supposed to be 2^n, log2(x) will only work out to an integer if the guts of log2() check for such things.

  • Carl Witthoft (google) in reply to Joseph Osako

    Very much this. GG is complaining about 3 vs. 2.9999999, and here we get log2(14) equal to log2(16). Oh boy.

  • Herby (unregistered)

    For problems like this, the shift operator is your friend. Its pals are the bitwise 'and' and bitwise 'or'.

    Note to original programmer: You need a couple of clues to do this "properly". Please obtain such items.

    Of course you could look it up in a table. I worked on an 8-bit microprocessor that did exactly that!
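
    A minimal sketch of that table-driven idea (layout and names are illustrative): precompute log2 for every byte, then find the highest nonzero byte.

    #include <cstdint>
    #include <cstdio>

    static uint8_t log2_byte[256];

    static void init_log2_table() {
        for (int i = 2; i < 256; i++)                   // log2_byte[1] == 0 already
            log2_byte[i] = (uint8_t)(log2_byte[i / 2] + 1);
    }

    static int log2_u32(uint32_t x) {
        if (x >> 24) return 24 + log2_byte[x >> 24];    // highest byte nonzero
        if (x >> 16) return 16 + log2_byte[x >> 16];
        if (x >> 8)  return 8  + log2_byte[x >> 8];
        return log2_byte[x];                            // x == 0 stays the caller's problem
    }

    int main() {
        init_log2_table();
        std::printf("%d %d %d\n", log2_u32(1), log2_u32(8192), log2_u32(0x80000000u));
        return 0;
    }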

  • (nodebb) in reply to Carl Witthoft

    Xackly!!

  • Joseph Osako (google) in reply to Ext3h

    One does not simply bit shift their way into Mordor. We need a plan!

  • Benito van der Zander (google)

    <= becomes CMP which is a subtraction, does it not?

    I wonder if it would still be faster to replace it with & after one x = x - 1; e.g. x <= 65536 becomes (x & 0xFFFF0000) == 0.
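
    A quick sketch checking that equivalence over a range (types and bounds are illustrative):

    #include <cassert>
    #include <cstdint>

    int main() {
        // after y = x - 1, "x <= 65536" is the same test as "top 16 bits of y clear"
        for (uint32_t x = 1; x <= 200000; x++) {
            uint32_t y = x - 1;
            assert((x <= 65536u) == ((y & 0xFFFF0000u) == 0));
        }
        return 0;
    }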

  • JG (unregistered) in reply to Herby

    IIRC, the original DOOM had lookup tables for various functions.

  • Paul Neumann (unregistered) in reply to Pietro Gagliardi (andlabs)

    @Pietro Gagliardi (andlabs), I knew it looked familiar. -- https://books.google.com/books?id=LpVCAwAAQBAJ&lpg=PR1&dq=code%20complete&pg=PA633#v=onepage&q&f=false

    Unfortunately the refactored version -- which we see in the story -- is on page 634 which is not available for preview. It's fantastic that we see an example from a book on clean coding as a feature on TDWTF!

  • Paul Neumann (unregistered) in reply to Herby

    @Herby, please describe this look-up table function you speak of. I am intrigued by how different it might look.

  • Koen van Dratel (unregistered) in reply to urkerab

    Nice... Your code elegantly describes & implements the algorithm used by the author of the if-then-else-tree (which is OK & ¬WTF BTW).

  • (nodebb) in reply to Ext3h

    I wasn't saying it was sensible.

    I was saying that it wasn't horribly slow.

    Addendum 2016-07-26 14:00: I was doing it in comparison to stringifying or some other thing, because from the context that sounded like what the original suggestion was.

  • Roboticus (unregistered)

    int intLn2(int val) {
        if (val < 1) throw "Bad input value!";
        int rv = 0;
        if (val & 0xffff0000) { rv += 16; val >>= 16; }
        if (val & 0xff00)     { rv += 8;  val >>= 8; }
        if (val & 0xf0)       { rv += 4;  val >>= 4; }
        if (val & 0xc)        { rv += 2;  val >>= 2; }
        if (val & 0x2)        { rv += 1;  val >>= 1; }
        return rv;
    }

  • Roflcopter (unregistered) in reply to Ext3h

    "Replacing division by bit shift manually is smart, but it isn't wise. It breaks the very second should the endianess change."

    LOL, I love it when the people who have no clue try to sound authoritative. Arithmetic operations occur in the registers which have no endianness, that's only a concern when you load and store.

  • foxyshadis (unregistered)

    Geez, guys, has none of you ever heard of the BSR/BFFFO instruction that was added in i386/68020 to make this a one-instruction operation? Every other CPU instruction set has the equivalent inverted operation (count leading zeros), meaning you can subtract the result from the operand size for a two-instruction version. No matter what, it's always going to be faster than repeatedly testing and setting values.

    Arguing over whether a loop or a binary tree is more efficient in a tight inner loop is rather missing the forest for the trees. A real programmer would work out how to generate the one- or two-op function instead of creating this elegant but slow mess.

  • Jonathan Harston (google)

    Somebody's had SingleEntrySingleExit beaten into them too hard. If you really really really want to do this, it's the perfect case for multiple exits:

    if a < 1  return 0
    if a < 2  return 1
    if a < 4  return 2
    if a < 8  return 3
    if a < 16 return 4

    etc.

  • George Gonzalez (unregistered) in reply to KCE

    This isn't theoretical math, it's inside a computer, where floating-point division is never going to give you the right answer.

  • John Wiltshire (google)

    If people are going on about instruction counts, then the x86/x64 architecture already has a single instruction that does the heavy lifting for you. It's called "Bit Scan Reverse" (BSR).

    http://x86.renejeschke.de/html/file_module_x86_id_20.html

    ARM has a similar instruction and GCC has appropriate intrinsics.

    Otherwise, aside from the structure, a branching if statement is pretty much the most efficient thing you can do on a 32 bit input. I do mostly agree with Jonathan Harston though - SESE is the wrong answer here and an unrolled loop of returns starting from the biggest number down is best (assuming uniform distribution of inputs).

    Doh - foxyshadis beat me to it.
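
    A minimal sketch with the GCC/Clang intrinsic, for the curious: 31 - clz(x) is the index of the highest set bit, i.e. floor(log2(x)).

    #include <cstdio>

    static int log2_bsr(unsigned x) {
        return x ? 31 - __builtin_clz(x) : 0;   // __builtin_clz(0) is undefined, so guard
    }

    int main() {
        std::printf("%d %d %d\n", log2_bsr(1), log2_bsr(8192), log2_bsr(0x80000000u));
        return 0;
    }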

  • KCE (unregistered) in reply to George Gonzalez

    I'm well aware of that. No log function is going to give an exact answer due to floating point error and approximation. My comment was to state that computing a log base 2 by taking a ln and then dividing by a constant is not worse than the code in the article. In fact it's probably better if they wanted a floating point value return instead of an integer. And both floating point error and approximation are most likely the reason that the original comment I responded to gave a result of 2.9999999999 for log base 2 of 8.

  • Some random reader (unregistered)

    Bit Twiddling Hacks has a section on how to find base 2 logarithms. It starts with the obvious bit-shifting while loop and gets more interesting from there. My favorite is the De Bruijn sequence lookup table. https://graphics.stanford.edu/~seander/bithacks.html#IntegerLogObvious
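
    For reference, the multiply-and-lookup version from that page goes like this (table and multiplier reproduced from Bit Twiddling Hacks; check against the page before trusting my transcription):

    #include <cstdint>
    #include <cstdio>

    static const int tab32[32] = {
         0,  9,  1, 10, 13, 21,  2, 29,
        11, 14, 16, 18, 22, 25,  3, 30,
         8, 12, 20, 28, 15, 17, 24,  7,
        19, 27, 23,  6, 26,  5,  4, 31
    };

    static int log2_debruijn(uint32_t v) {
        v |= v >> 1;    // smear the highest set bit downward...
        v |= v >> 2;
        v |= v >> 4;
        v |= v >> 8;
        v |= v >> 16;   // ...so v is now 2^(log2(v)+1) - 1
        return tab32[(v * 0x07C4ACDDu) >> 27];  // De Bruijn-style perfect hash
    }

    int main() {
        std::printf("%d %d %d\n", log2_debruijn(1), log2_debruijn(8192),
                    log2_debruijn(0xFFFFFFFFu));
        return 0;
    }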

  • löchlein deluxe (unregistered) in reply to Ross

    My guess: they ran statistics on what values actually occurred and optimized for common cases. Yes, this would need comments.

  • gnasher729 (unregistered) in reply to Ext3h

    The result of a bit shift changes when endianness changes? Who comes up with nonsense like that? Seriously, that's the biggest nonsense I've ever heard. Bit shifts are defined in terms of values, not in terms of the representation. 100000 >> 7, for example, will give exactly the same result whatever the endianness of the processor is.

  • yk (unregistered)

    int log2_integral(int n) {
        union { double d; unsigned char c[8]; } u;
        u.d = n;
        return (u.c[6] >> 4) + (u.c[7] << 4) - 1023;
    }

  • owlman (unregistered) in reply to Jonathan Harston

    And then you'd have at most 32 steps, with an average of 16 steps. More readable? Yes. More efficient? Not at all.

  • Foo AKA Fooo (unregistered) in reply to yk

    Now, that is endianness-dependent (and wrong for denormals), thanks for demonstrating.

  • tgape (unregistered) in reply to Maciej

    One can always blend the approaches.

    int i = 0;
    while (x >= 256) { x /= 256; i += 8; }
    if (x >= 16) {
        if (x >= 64) {
            return i + 6 + (x >= 128);
        } else {
            return i + 4 + (x >= 32);
        }
    } else if (x >= 4) {
        return i + 2 + (x >= 8);
    } else {
        return i + (x >= 2);
    }

    This uses 4-8 comparisons. If the numbers are spread evenly over the range, the average performance will be worse, but if the numbers are sufficiently skewed to the low end, the average performance could be better.

    This is one of those functions that is simple enough that having a standard cache for the values doesn't make sense; the hash computation would probably be more expensive than the function itself. But if its performance has a serious impact on overall performance, it's likely that one or more of the algorithms calling the log_2 routine so much could be modified to store log_2 values in a local variable somewhere and save some of those recalculations, which would probably have more of an impact than whether this routine does an average of 30 comparisons or 6.

  • Erlando (unregistered)

    public static int Log2(int x) { return (int)Math.Ceiling(Math.Log(x, 2)); }
