Admin
I love your poetry!
The first paragraph is incredible!!!
Admin
I've seen an even better test for a zero IP address in some production code:
bool is_addr_null = true;
if (media_addr)
    is_addr_null = !strcmp(inet_ntoa(media_addr->sock_addr.sin_addr), "0.0.0.0");
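For comparison, here's a minimal sketch of the direct check that hack dances around, assuming POSIX sockets (the helper name is hypothetical):

#include <arpa/inet.h>    /* htonl() */
#include <netinet/in.h>   /* struct in_addr, INADDR_ANY */

/* Returns non-zero if the address is missing or is 0.0.0.0. */
static int addr_is_null(const struct in_addr *a)
{
    return a == NULL || a->s_addr == htonl(INADDR_ANY);
}

No string round-trip through inet_ntoa() needed.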
Admin
Oh, the joys of 18-bit shorts, 60-bit ints, 6-bit bytes, and ones-complement math. And no control characters. There was a very good reason Pascal had a function to tell if a text file was at the end of a line or not. I'm going into the corner to have the shakes for a while now.
Admin
Oh, but no! In your endless arrogance you assume ntohl() changes only endianness, which is not truly multiplatform thinking. Imagine the standard for network communication changes some day and all IP addresses need to be XORed against 0xCAFEB1BA. Now ntohl() changes more than endianness, and you will still be able to determine HOW the IP differed from 0.0.0.0 from the OR pattern, when it did.
--dubya ^^^^^^captcha
Admin
[quote user="Bill"][quote user="fenned"]A 'long' is always 32 bits; [/quote] Sigh - try again. An INT is always the native architecture size of the processor. A long is always the longest int that can be represented on the architecture... Where did you see what you said in a standard C reference. Now I must admit that most architectures got this WRONG in the 64 bit transition for compatibility reasons... But in reality a long can be 64, 128, or even 256 bits on future architectures.[/quote]
An INT is not always the native architecture size. In SDCC, for example, it is 16 bits even though the target architecture is 8-bit.
[quote user="K&R page 36"] The intent is that short and long should provide different lengths of integers where practical; int will normally be the natural size for a particualar machine. short is often 16 bits, long 32 bits, and int either 16 or 32 bits. Each compiler is free to choose appropriate sizes for its own hardware, subject to the restriction that shorts and ints are at least 16 bits, longs are at least 32 bits, and short is no longer than int, which is no longer than long. [/qoute]
Admin
Fixed! Everyone knows that IsIpAddressZero() should only return true if you have an odd number of 1s.
Admin
I don't think Microsoft has read this spec.
FILE *infile;
unsigned long fp;
fp = ftell(infile);
fp will be garbage when you're past +2147483647 bytes into the file.
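For what it's worth, a minimal sketch of the usual workarounds, assuming a POSIX system for fseeko()/ftello() (the file name is just illustrative):

#define _FILE_OFFSET_BITS 64   /* ask glibc for a 64-bit off_t, before any includes */
#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    FILE *infile = fopen("huge.bin", "rb");
    if (!infile)
        return 1;
    fseeko(infile, 0, SEEK_END);     /* 64-bit-safe seek */
    off_t size = ftello(infile);     /* 64-bit-safe tell */
    /* On MSVC the equivalents are _fseeki64()/_ftelli64(), which use __int64. */
    printf("%lld bytes\n", (long long)size);
    fclose(infile);
    return 0;
}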
Sigh.
captcha: craaazy
Admin
There's also no requirement that "long" be the longest type available, and there are many systems where it isn't (e.g. Win64). The standard only says long is at least 32 bits; if you want to be sure you get a 64-bit (or larger) integer, use long long.
(Or one can use the various types specified in <stdint.h>; at worst the compile will fail if you ask for a 32-bit type and the machine only has 48- and 96-bit types.)
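A minimal sketch of that approach (the variable names are just illustrative):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t ip = 0;            /* exactly 32 bits, or the translation unit fails to compile */
    int_least32_t count = 0;    /* at least 32 bits; always available */
    printf("%" PRIu32 " %" PRIdLEAST32 "\n", ip, count);
    return 0;
}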
You guys need to stop commenting on others' WTFs until you stop making them yourselves.
Admin
Just had to say this... X86 WEENIE!
On the last 3 machines that I used (SGI Origin200, DEC AlphaStation 500/266, and Sun Blade 150), sizeof(int) == 8. Granted, these aren't machines you're likely to find on your desk.
And, that long will fit nicely into any one of the 64-bit registers....
Admin
And for the three people left on the planet who don't already know...
This is why anyone even thinking about portability always has a header file with something like "typedef <X> int32;", where <X> is replaced by the platform- and compiler-specific data type for a signed 32-bit integer, plus a similar line for every other useful fixed-width integer type. ;)
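Something like this, say -- a sketch assuming just two hypothetical target compilers (these days <stdint.h> does the same job):

/* portable_ints.h -- hypothetical portability header */
#if defined(_MSC_VER)
typedef __int32          int32;
typedef unsigned __int32 uint32;
#elif defined(__GNUC__)
typedef int              int32;    /* int is 32 bits on the GCC targets we care about */
typedef unsigned int     uint32;
#else
#error "Please add int32/uint32 typedefs for this platform"
#endif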
Also, we put network addresses into a NetworkAddress class (and not an "integer" of any size), because ridiculous assumptions such as "an IP address fits into a 32-bit unsigned integer" (the idiot didn't even use an unsigned LONG) have already been revealed as nonsense even on a modern 64-bit x86 machine, if you're using IPv6.
It's been years since "optimisations" like putting an IP address into an integer were a useful idea.
Admin
It should be obvious that doing a lot of operations instead of one atomic operation (==) degrades thread safety.
If a comment sounds like nonsense, it's probably a joke!
Admin
Admin
Let's go increasingly pedantic... technically the signed integers are not required to hold -2^(bits-1) (the -2147483648 value). This allows a sign-and-magnitude implementation, for instance, which can only hold -(2^(bits-1)-1) to 2^(bits-1)-1. (In other words, a signed char is only guaranteed to be able to hold -127 to 127, an int -32767 to 32767, etc.)
This also means that the only value a bit field "int bf:1" is guaranteed to be able to hold is 0.
Admin
Did anyone notice that Win32 BOOL is not C++ bool?
bool is 1 byte (8 bits) on most platforms. BOOL is a typedef for int, so it's 32 bits on both x86 and x64. There's no bool in plain C. I think there is a reason for this anyway, because this way kernel code branching on BOOLs can use "test %reg, %reg; jz _foo_is_zero".
It's worse that one of their BOOL-valued functions returns TRUE, FALSE, or (BOOL) -1 (i.e. 0xffffffff)... This was mentioned somewhere on the forum.
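That's presumably GetMessage(), the usual example; here's a minimal sketch of the pitfall, assuming a standard Win32 message loop:

#include <windows.h>

/* Naive loop: GetMessage() can also return -1 on error, which is truthy,
   so this loop keeps going (or misbehaves) when an error occurs. */
void naive_loop(void)
{
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}

/* Documented form: check for -1 explicitly. */
void careful_loop(void)
{
    MSG msg;
    BOOL ret;
    while ((ret = GetMessage(&msg, NULL, 0, 0)) != 0) {
        if (ret == -1) {
            /* handle the error and bail out */
            break;
        }
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}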
Admin
No no NO!
All the standard says is that:
A char must be at least 8 bits.
sizeof(char) == 1
sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long)
Captcha: burned
Admin
Bzzzt! Wrong! Sigh... try again.
I think the real WTF is the daily battle to figure out who's king of the nerds in an internet bug forum. You wouldn't talk to your coworkers like that, would you?
Admin
Longs are at least 32 bits; ints are the word length of the CPU. On some 64-bit systems an int is bigger than a long...
Addendum (2007-03-19 22:06): although those systems are not exactly standards-compliant...
Admin
You got it wrong. It seems to be checking whether any one of the quads is non-zero.
This code reads on par with brainfuck.
Admin
Hmm, really? Isn't it just the accumulator in the MAC that's 40 bits? So basically it's still 32-bit arithmetic; you just get more than one carry bit when adding in the MAC.
Admin
BOOL IsIpAddressZero( LONG lIpAddress ){ return lIpAddress; }
Addendum (2007-03-20 00:56): I just realized that I forgot to negate it. i.e
BOOL IsIpAddressZero( LONG lIpAddress ){ return !(lIpAddress); }
Admin
As usual, the real WTF is all the ignorant comments from people with poor C skills. To summarize:
The main WTF is that the guy used a complicated expression when he could simply have tested against zero.
A secondary WTF is that the function return value is surprising given the function name.
There are a couple of minor problems, according to Standard C, which I wouldn't call WTFs because I don't actually know of any systems where they would arise: bit manipulation on signed values could cause a trap, and right-shifting a negative signed value is implementation-defined.
Regarding the sizes of types, the guarantees are:
char can hold at least -127 to 127, short and int can hold at least -32767 to 32767, long can hold at least -2147483647 to 2147483647, and sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long).
You can deduce from the above that int must have at least 16 bits, and long must have at least 32 bits. There is no requirement that 'int' be a machine word, nor that 'long' be the maximum of anything. In fact I have used compilers where 'int' was not a machine word and a 'long' could not even be held in a CPU register (it was an 8-bit CPU).
C90 does not have 'bool'. It was added in C99 but one cannot assume that the posted code is C99 compliant, and even if it were, one cannot assume that BOOL is a define/typedef for bool. In my experience BOOL is usually a typedef for int or char.
Writing "foo?0:1" as someone suggested, is inane. Either !foo or foo != 0 is far better.
Testing anything against TRUE is inane, so it isn't a WTF if this function has trouble with it.
Regarding 'ntohl', it is a POSIX thing so I cannot authoritatively comment on it, but apparently it takes a 32-bit unsigned value as argument, so if LONG is more than 32 bits then the extra bits will just get truncated and there is no problem. I fail to see how an IPv4 address would use more than 32 bits anyway.
To the poster who suggested that ntohl may do more than just re-order the bytes: if it turns out that that is possible, then it still does not justify the OP's approach. The code would simply become ( ntohl(foo) != 0 ) or similar.
Admin
You call this reversed logic????
Pretty funny, because when I started with C (before that I had worked as a Java developer for years), I was always struggling with this C "logic". E.g., if some C programmer writes a function that checks whether a file is hidden, he will surely return a non-zero value when the file is hidden, so the caller who wants to act on a non-hidden file ends up writing a negated call -- "if not hidden, do something". This just isn't readable and is definitely what I call reverse logic! I know this goes for every C standard library function, and maybe that's one of the reasons why C just isn't my favourite language. :-)
Admin
how does 'if not hidden, do something' confuse you or even begin to qualify as "reverse logic?"
seeing as how logical operations work the exact same in java as they do in c, i'm really curious as to how you had issues with c logic upon switching
Admin
Eh, the reason 1.2.3.4 is sometimes called a "dotted quad" is not because it has dots and quads. Quad is shorthand for quadruple, aka a 4-tuple (of octets in this case).
Also, the difference between | and || would not be the difference between buggy and correct code, as you seem to imply.
Admin
IP addresses don't fit in an integer. First, to be pedantic, IP addresses are (ok, can be) 128 bits. Second, nothing guarantees an integer is (as you intend) at least 32 bits. It might be 16 or even 8 bits, or 9 or 18. A "long" is a better bet but still no guarantee. Try int32_t, or better, struct sockaddr.
Admin
I don't know any modern systems for which this is true, but the fact of the matter is that you have no guarantee that a long of 0x00000000 is equal to zero.
It might sound crazy, but there have been systems for which these two are not equal.
Some old architectures used the sign bit differently, inverting it (so that a value with the sign bit set is positive) or just handling zero differently (where both 0x80000000 and 0x00000000 are zero, or 0x80000000 is zero and 0x00000000 is one).
Obviously, a comparison of ip == 0x0 would have sufficed either way, but ip == 0 isn't necessarily identical to the function given for all processor architectures.
Admin
However, it's more a case of me not wanting to go through the revisions and find out who added these lines, as you have to manually review the diffs for each revision and I didn't want to spend the time. I asked the others, and nobody admitted to it.
My guess is that whoever wrote it had considered how they would test an IP address for a specific (non-zero) value, and went on from there, i.e. they started by facing in the wrong direction, then moved forward.
The code is used on Win32, so LONG means a 32-bit signed integer. For those that don't know, on Win32, BOOL is also just a 32-bit integer (and yes, you can have a BOOL of 2). I would also very much like to mention that any network-related programming on Win32 really sucks. I only did it for the money!
When I found the function, I clearly remember just stopping and staring, and thinking "No... that just can't be!"
Anyway, here's my list (ignoring that the entire function is completely unnecessary):
Like I said (and others have already posted), you can compare an IP address against zero inline directly.
I liked akatherder's IPv6 implementation. Very fitting. :-)
Another WTF I found in a different module was the use of signed 32-bit integers for keeping track of file sizes (in bytes). As we know, files can get smaller than zero, and never bigger than 2G. Somebody had fixed this by then redefining it to be the number of K needed for the file, but still in a signed int. Aaaarrrgh! It's now __uint64.
-- Steve
PS. Hi SKU!
Admin
The only horror here is the number of people who have proved themselves totally incapable of reading and understanding a simple piece of code like that. I mean really, WTF!?
Admin
Actually, in this case, it was not Starteam's fault. IIRC, the code was inherited from a product that was developed ages ago by a different company that used a completely different source code control system. That company's source code was then acquired as part of a merger or something similar. So the history of this function was lost: back then, the newly acquired source code was checked into Starteam in its latest incarnation only, without importing the history of all the files, either because the format was incompatible or because keeping the history was not considered important enough.
-- SKU
Admin
No it can't. It can be 16 bits or 18 bits, but a standards-conforming implementation cannot have 8- or 9-bit integers. As has been said a number of times in this thread, an int must be able to hold the values -32,767 to +32,767, a total of 65,535 values. In order to unambiguously distinguish between that many values, you NEED at least 16 bits. (Pigeonhole principle.)
The only way to get around this is if your bits aren't really bits and can hold more than two values. If you are using a ternary computer, you would need at least 11 trits (10 falls just shy, being able to distinguish only 59,049 values). Only with a base-4 computer (or higher) could you get away with only 8 "bits" of information.
Addendum (2007-03-20 12:01): I guess you could make the argument that you can't rule out being on a non-conforming platform, and thus you can't rely on the limits defined in the C standard. But at this point in time, nearly 20 years after C was standardized, I would argue that if that's the case you're not programming in C. (Or rather, that your compiler isn't a C compiler.) This assertion is probably subject to disagreement.
Admin
Then that compiler is wrong (or at least not compliant). sizeof(char)==1, always, because sizeof measures the size of the type in chars (which is by definition the smallest addressable unit). Now I don't have any experience with Cray, but if all standard primitives are 64 bits, including chars, then sizeof(primitive)==1 and CHAR_BIT==64.
Admin
Funny, the SGIs and Alphas I have around have sizeof(int) == 4 and sizeof(long) == 8. Think about it for a minute: would you use a C compiler that didn't give you at least one 8-bit, one 16-bit and one 32-bit type? If int is 64 bits, you're missing 16 or 32.
OG.
Admin
Unfortunately, that design pattern has many shortcomings, the most pronounced of which is that some time later down the road the code will look just as obfuscated to the original author as to any other person.
The only job protector that works is the Common Sense design pattern -- and only sometimes! In some companies it is a job killer... but you wouldn't want to work there anyway.
Admin
No, && is the logical, & the binary and operator.
0xDEADBEEF & 0xFF == 0xEF, but 0xDEADBEEF && 0xFF == any non-zero value
Here's an extreme example: 1 && 2 != 0, but 1 & 2 == 0
Admin
Ok, actually this should be written as "(1 && 2) != 0, but (1 & 2) == 0" because of C's operator precedence rules.
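A quick sketch showing both the difference and the precedence trap:

#include <stdio.h>

int main(void)
{
    printf("%d\n", 1 && 2);               /* logical AND: both operands non-zero, prints 1 */
    printf("%d\n", 1 & 2);                /* bitwise AND: 01 & 10 == 00, prints 0 */
    printf("%#x\n", 0xDEADBEEF & 0xFF);   /* prints 0xef */
    /* Without parentheses, 1 && 2 != 0 parses as 1 && (2 != 0),
       because == and != bind tighter than & and &&. */
    return 0;
}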
Admin
That's crazy all right, since 0x00000000 is just another way to spell 0. It does not mean "all bits zero". (The same goes for (void*)0x00000000, which some people think means an all-bits-zero pointer. It doesn't, but it does mean a null pointer just like (void*)0.)
Wow. Weird. Standard C says an integer with all bits zero is 0 though, and this function is a Standard C function since it uses a prototype.
Even so, one WTF here is that the argument should have been unsigned, which would avoid problems like LONG_MIN == -(2**31 - 1).
OTOH, shifting the return from ntohl() is not a WTF, like someone suggested, since the return value is unsigned regardless of the signedness of the input value.
Admin
Allows it for two's complement too: (sign bit 1, all others 0) can be a trap representation. I've seen mention of such a machine once.
Yup. Which is one reason one should use unsigned int:1.
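A minimal sketch of the difference (struct and field names are just illustrative):

#include <stdio.h>

struct flags {
    int          s : 1;   /* signed 1-bit field: only 0 is portably storable */
    unsigned int u : 1;   /* unsigned 1-bit field: portably holds 0 and 1 */
};

int main(void)
{
    struct flags f = { 0, 0 };
    f.u = 1;              /* fine everywhere */
    f.s = 1;              /* implementation-defined: often reads back as -1 */
    printf("s = %d, u = %u\n", (int)f.s, (unsigned)f.u);
    return 0;
}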
Admin
I'm not sure what you're getting at here. The thing I said that was wrong was that you're only guaranteed that sizeof(int) <= sizeof(long). You're also guaranteed that int is at least 16 bits and long is at least 32 bits.
It is not true that int is either 16 or 32 bits. It could also be 20 bits (PDP-11?), or some other esoteric number -- it just has to be able to store at least -32767 to 32767.
It is also not true that long is exactly 32 bits, or that it's either 32 or 64 bits.
Admin
Actually it is implementation-defined what happens to the higher bits when you right-shift a negative signed integer. In fact, on x86 you will typically find that it gets left-filled with 1 bits (copies of the sign bit). The 0xFF mask is used to mask off all those bits.
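A small sketch of the idiom (the values are illustrative; that the exact shift behaviour is implementation-defined is the point):

#include <stdio.h>

int main(void)
{
    long ip = (long)0xC0A80001;            /* 192.168.0.1; negative if long is 32 bits */
    unsigned int top = (ip >> 24) & 0xFF;  /* without & 0xFF, sign extension can leave high bits set */
    printf("%u\n", top);                   /* 192 on the usual two's-complement machines */
    return 0;
}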
Admin
Anyway, it equals 1, and does not equal any other value.
if ( (0xDEADBEEF && 0xFF) != 1 ) puts("Broken compiler");
Addendum (2007-03-21 22:56): By 'higher' I mean 'lower', i.e. == binds more tightly than &&
Admin
BOOL (uppercase) is defined by the Windows API as a typedef to some sort of integer. bool (lowercase) is defined by C++ and can accept only true or false.
Admin
Oops, sorry, I meant to quote another post.
Admin
According to C11 5.2.4.2.1, INT_MIN may be as high as -32767 and INT_MAX may be as low as 32767... Please explain how one might portably fit a 32-bit integer into 16 bits?