Admin
if (comment number == 0) { if comment number <= 65535 { RETURN FRIST! } }
Admin
Well... I would have used hex constants. Of course there are many implementations, but it would be interesting to do both speed and memory footprint comparisons, as well as knowing the environment / use case.
Admin
Why not output it as a binary string and count the position of the rightmost 1?
Admin
I mean leftmost, of course. Duh.
Admin
Why isn't it sorted properly?
Admin
It wasn't written for 16 bits then extended; it was written for 32 bits and performs a binary chop.
Admin
Could be worse. I've seen some public-domain weather-predicting code that in about 13 places computes Log2 by using natural logs and dividing by a constant. It gets the answer right, most of the time. Well, not for certain rare values like "8", whose Log2 comes out as 2.9999999...
Admin
It's also bugged, in that according to it, log2(0) is 0. If you pass it zero, it should explode in some way. I recommend creating an exception class called "up" with a default constructor, assuming C++, so you can write "throw up()".
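A minimal C++ sketch of that joke (the names here are invented for illustration):

```cpp
#include <cassert>

// Hypothetical sketch: an exception class named "up", so that rejecting
// log2(0) literally reads "throw up".
struct up {};

int log2_checked(unsigned x) {
    if (x == 0) throw up();   // explode on invalid input, as suggested
    int r = 0;
    while (x >>= 1) ++r;      // floor(log2(x)) by repeated halving
    return r;
}

// Helper so the zero case can be exercised without terminating the program.
bool throws_up(unsigned x) {
    try { log2_checked(x); return false; }
    catch (up&) { return true; }
}
```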
Admin
I did exactly this before:
Admin
"At least here, we have the assurance that the universe's biggest numbers all have a log base 2 of 31."
This is an odd complaint, seeing as the argument is a (32-bit) signed integer, and thus its log base 2 can never be higher than 31 (rounded down).
The handling of 0 or negative values is all wrong, of course, but for all positive numbers it seems to work as intended.
Admin
Hey, it's within a reasonable epsilon!
Admin
"Why not output it as a binary string and count the position of the rightmost 1?"
Because that would take MORE clock cycles than this nested-if construction. Really, if this were done just a little better, it would be a pretty good implementation.
Admin
How about a while loop: while the integer is greater than 0, bit shift and increment a return value.
That'd be a heck of a lot more efficient way to find the highest 1 than this nested-if hell.
static int Log2(int x)
{
    if (x <= 0) throw new InvalidOperationException("Log2 only supports positive values.");
    int result = 0;
    while ((x >>= 1) > 0) result++;
    return result;
}
Admin
Mathematically this is valid: log_2(x) = ln(x)/ln(2). So the constant was probably ln(2). If they didn't have enough decimal places, or had a typo in their hard-coded ln(2), the results would be off.
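A quick sketch of that identity (function name invented here). With full-precision double constants the quotient is extremely close to integral for powers of two, but truncating it to int rather than rounding is exactly what bites when it comes out as 2.9999999...:

```cpp
#include <cmath>
#include <cassert>

// log2(x) computed via the identity log_2(x) = ln(x) / ln(2),
// as discussed in the comment above.
double log2_via_ln(double x) {
    return std::log(x) / std::log(2.0);
}
```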
Admin
Would it be faster though? In the worst case the loop has to perform 31 steps, while the "nested if hell" will find a solution in 6 steps max.
Admin
A return type of int for a logarithm? Seriously? What are they using that for? Yes, I can see the possible use of getting the order of highest significant bit in a value, but calling it the integral log2(), while technically correct, is misleading as hell. Calling the function something more meaningful such as high_byte() would be clearer, and might even help you see why the algorithm being used is (to channel Jim Fucking Sterling, Son, for a moment) a pile of red raw anuses.
Addendum 20160726 10:14: I means high_bit(), sorry.
Addendum 20160726 10:15: And now I are spellar bad. Fuck.
Admin
That's not more efficient! Shorter to write, yes, but less efficient. As Maciej says, the original code may be 5 times faster. If used in a tight loop, a 5x speedup might well matter. We don't know the context (and given the misleading statements about "two halves" (cf. the "O(log n)" comment) and about "the universe's biggest numbers", I doubt the OP would even recognize the context if it stared at him). So, sorry, no WTF as posted!
Sure, it might be written somewhat more readably using hex constants and, dare I say, even ?:, but the only real WTF in this posting is the inconsistent markup of the return statements.
Admin
8 space tabs? Burn the heretic!
Admin
You could probably do a binary search loop if you really want, something like this:
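(The commenter's own snippet did not survive the forum formatting; the following is a guess at the shape of such a binary-search loop, with an invented name.)

```cpp
#include <cassert>

// Binary-search loop for floor(log2(x)): try half the remaining bit
// width each pass (16, 8, 4, 2, 1), keeping whatever shift still
// leaves a nonzero value.
int log2_bsearch(unsigned x) {
    int r = 0;
    for (int shift = 16; shift > 0; shift >>= 1) {
        if (x >> shift) { x >>= shift; r += shift; }
    }
    return r;   // floor(log2(x)) for x >= 1; returns 0 for x == 0
}
```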
Admin
I'm pretty sure this very function is discussed in Code Complete, with a much cleaner if-based solution that doesn't burn as many cycles as the while solution. Having log2(0) = 0 makes sense for some common uses of this function (for instance, rounding values up to the nearest power of 2).
@KCE: The problem is floating-point computation isn't necessarily ideal; some combinations of operations will generate not-quite-exact values that fail to round to the correct value.
Admin
What rolls down stairs alone or in pairs, rolls over your neighbour's dog? What's great for a snack and fits on your back? It's Log, Log, Log!
It's Log, Log, it's big, it's heavy, it's wood. It's Log, Log, it's better than bad, it's good! Everyone wants a log! You're gonna love it, Log! Come on and get your log! Everyone needs a Log!
Admin
public static int z = 0;
public static double slog(double y) {
    if (y <= 1) { return z; }
    else { z = z + 1; return slog(y/2); }
}
Does the same thing, short/sweet, blah blah blah. Still no idea why you'd want this function. I ran this using 8192 and got the expected 13. I then ran this using 8194 and got 14, as the WTF in this article would do, but I still don't get it. log_2(8194) will be 13.000352177480302, and so to get a return of 14 is just wrong.
Anyone think of a use for this, or is it maybe an attempt at log(x)/log(2)??
Addendum 20160726 11:32:
WTF with the format??
Addendum 20160726 11:33: Aahhh.... I didn't use CODE tags. WTF??
Admin
One does not simply bit shift. Seriously, don't. Not unless you really need to. And you definitely don't if you are just doing integer division by 2.
Replacing division with a bit shift manually is smart, but it isn't wise. It breaks the second the endianness changes. Or someone decides to change the data type. It may have worked for an integer, but breaks horribly when applied to a double.
Leave the smart optimizations to the compiler, and stick to writing semantically sensible code.
Admin
methinks YOU are the WTF in that particular case. If x is a float, even if x is supposed to be 2^n, log2(x) will only work out to an integer if the guts of log2() checks for such things.
Admin
Very much this. GG is complaining about 3 vs. 2.9999999, and here we get log2(14) equal to log2(16). Oh boy.
Admin
For problems like this, the shift operator is your friend. Its pals are the bitwise 'and' and bitwise 'or'.
Note to original programmer: You need a couple of clues to do this "properly". Please obtain such items.
Of course you could look it up in a table. I worked on some 8-bit microprocessor that did exactly that!
Admin
Xackly!!
Admin
One does not simply bit shift their way into Mordor. We need a plan!
Admin
<= becomes CMP, which is a subtraction, does it not?
I wonder if it would still be faster to replace it with & after one x = x - 1; e.g. x <= 65536 becomes (x & 0xFFFF0000) == 0.
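A sketch of that trick (helper names invented here): for x >= 1, decrementing once turns the range check x <= 65536 into a pure mask test.

```cpp
#include <cstdint>
#include <cassert>

// Comparison form: compiles to a CMP (i.e. a subtraction) plus a branch.
bool le_64k_cmp(uint32_t x)  { return x <= 65536; }

// Mask form: after x - 1, any value originally in [1, 65536] has no bits
// set in the upper halfword, so an AND replaces the compare.
bool le_64k_mask(uint32_t x) { return ((x - 1) & 0xFFFF0000u) == 0; }
```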
Admin
IIRC The original DOOM had lookup tables for various functions.
Admin
@Pietro Gagliardi (andlabs), I knew it looked familiar. https://books.google.com/books?id=LpVCAwAAQBAJ&lpg=PR1&dq=code%20complete&pg=PA633#v=onepage&q&f=false
Unfortunately the refactored version (which we see in the story) is on page 634, which is not available for preview. It's fantastic that we see an example from a book on clean coding as a feature on TDWTF!
Admin
fing forum
Admin
@Harby, Please describe this lookup table function you speak of. I am intrigued how different it might look.
Admin
Nice... Your code elegantly describes & implements the algorithm used by the author of the if-then-else tree (which is OK & ¬WTF BTW).
Admin
I wasn't saying it was sensible.
I was saying that it wasn't horribly slow.
Addendum 20160726 14:00: I was doing it in comparison to stringifying or some other thing, because from the context that sounded like what the original suggestion was.
Admin
int intLn2(int val) {
    if (val < 1) throw "Bad input value!";
    int rv = 0;
    if (val & 0xffff0000) { rv += 16; val >>= 16; }
    if (val & 0xff00)     { rv += 8;  val >>= 8; }
    if (val & 0xf0)       { rv += 4;  val >>= 4; }
    if (val & 0xc)        { rv += 2;  val >>= 2; }
    if (val & 0x2)        { rv += 1;  val >>= 1; }
    return rv;
}
Admin
"Replacing division by bit shift manually is smart, but it isn't wise. It breaks the very second should the endianess change."
LOL, I love it when the people who have no clue try to sound authoritative. Arithmetic operations occur in the registers which have no endianness, that's only a concern when you load and store.
Admin
Geez, guys, none of you have ever heard of the BSR/BFFFO instruction that was added in the i386/68020, to make this a one-instruction operation? Every other CPU instruction set has the equivalent inverted operation (count leading zeros), meaning you can subtract it from the operand size for a two-instruction result. No matter what, it's always going to be faster than repeatedly testing and setting values.
Arguing over whether a loop or a binary tree is more efficient in a tight inner loop is rather missing the forest for the trees. A real programmer would work out how to generate the one- or two-op function instead of creating this elegant but slow mess.
Admin
Somebody's had Single-Entry-Single-Exit beaten into them too hard. If you really really really want to do this, it's the perfect case for multiple exits:
if a < 1 return 0
if a < 2 return 1
if a < 4 return 2
if a < 8 return 3
if a < 16 return 4
etc.
Admin
This isn't theoretical math; it's inside a computer, where floating-point division is never going to give you the right answer.
Admin
If people are going on about instruction counts, then the x86/x64 architecture already has a single instruction that does the heavy lifting for you. It's called "Bit Scan Reverse" (BSR).
http://x86.renejeschke.de/html/file_module_x86_id_20.html
ARM has a similar instruction and GCC has appropriate intrinsics.
Otherwise, aside from the structure, a branching if statement is pretty much the most efficient thing you can do on a 32-bit input. I do mostly agree with Jonathan Harston, though: SESE is the wrong answer here, and an unrolled loop of returns starting from the biggest number down is best (assuming a uniform distribution of inputs).
Doh, foxyshadis beat me to it.
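For reference, a sketch using the GCC/Clang intrinsic mentioned above, which compiles down to BSR (or LZCNT) on x86. __builtin_clz(0) is undefined, so the zero case is guarded here to mimic the article's log2(0) == 0 behaviour:

```cpp
#include <cassert>

// floor(log2(x)) via count-leading-zeros: the highest set bit of a
// 32-bit value sits at position 31 - clz(x).
int log2_clz(unsigned x) {
    return x ? 31 - __builtin_clz(x) : 0;
}
```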
Admin
I'm well aware of that. No log function is going to give an exact answer due to floating point error and approximation. My comment was to state that computing a log base 2 by taking a ln and then dividing by a constant is not worse than the code in the article. In fact it's probably better if they wanted a floating point value return instead of an integer. And both floating point error and approximation are most likely the reason that the original comment I responded to gave a result of 2.9999999999 for log base 2 of 8.
Admin
Bit Twiddling Hacks has a section on how to find base 2 logarithms. It starts with the obvious bitshifting while loop and gets more interesting from there. My favorite is the De Bruijn sequence lookup table. https://graphics.stanford.edu/~seander/bithacks.html#IntegerLogObvious
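A sketch of that De Bruijn variant, with the table and multiplier as published on the Bit Twiddling Hacks page linked above: smear the top set bit downward, then use a perfect-hash multiply to index a 32-entry table.

```cpp
#include <cstdint>
#include <cassert>

// floor(log2(v)) via a De Bruijn sequence lookup table.
int log2_debruijn(uint32_t v) {
    static const int tab[32] = {
        0, 9, 1, 10, 13, 21, 2, 29, 11, 14, 16, 18, 22, 25, 3, 30,
        8, 12, 20, 28, 15, 17, 24, 7, 19, 27, 23, 6, 26, 5, 4, 31
    };
    v |= v >> 1; v |= v >> 2; v |= v >> 4;   // after these ORs, v equals
    v |= v >> 8; v |= v >> 16;               // 2^(floor(log2(v))+1) - 1
    return tab[(v * 0x07C4ACDDu) >> 27];
}
```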
Admin
My guess: they ran statistics on what values actually occurred and optimized for common cases. Yes, this would need comments.
Admin
The result of a bit shift changes when endianness changes? Who comes up with nonsense like that? Seriously, that's the biggest nonsense I've ever heard. Bit shifts are defined in terms of values, not in terms of the representation. 100000 >> 7, for example, will give exactly the same result whatever the endianness of the processor is.
Admin
int log2_integral(int n) {
    union { double d; unsigned char c[8]; } u;
    u.d = n;
    return (u.c[6] >> 4) + (u.c[7] << 4) - 1023;
}
Admin
And then you'd have at most 32 steps, with an average of 16 steps. More readable? Yes. More efficient? Not at all.
Admin
Now, that is endianness-dependent (and wrong for denormals); thanks for demonstrating.
Admin
One can always blend the approaches.
int i = 0;
while (x >= 256) { x /= 256; i += 8; }
if (x >= 16) {
    if (x >= 64) {
        return i + 6 + (x >= 128);
    } else {
        return i + 4 + (x >= 32);
    }
} else if (x >= 4) {
    return i + 2 + (x >= 8);
} else {
    return i + (x >= 2);
}
This uses 4-8 comparisons. If the numbers are spread evenly over the range, the average performance will be worse, but if the numbers are sufficiently skewed to the low end, the average performance could be better.
This is one of those functions that is simple enough that having a standard cache for the values doesn't make sense; the hash computation would probably be more expensive than the function itself. But if its performance has a serious impact on overall performance, it's likely one or more of the algorithms using the log_2 routine so much could be modified to store log_2 values in a local variable somewhere to save some of those recalculations, which probably would have more of an impact than whether this routine does an average of 30 comparisons or 6.
Admin
public static int Log2(int x) {
    return (int)Math.Ceiling(Math.Log(x, 2));
}