• Registered (unregistered)

    It looks to be old-style Wild West code, in which everything that could be declared as 'int' and still pass basic functionality was declared as 'int', mainly to save typing the longer 'unsigned int' and to minimize the need for casting.

  • nintendoeats (unregistered)

    In fairness, general good practice these days is not to use unsigned integers as a way of enforcing only positive sizes. Negative size bugs are obvious, uint overflow bugs are not.

  • (nodebb)

    In embedded systems, the shifting would be the only WTF because the environment is constrained enough that the size of unsigned int would be known. Platform constraints really are a thing there.

    In general, the width of unsigned int could be almost anything larger than a byte... but it probably isn't anything other than 32 bits on any computer actively used in the last 20 years. The width of unsigned long is far less certain.

  • (nodebb)

    Which, if you spot that memcpy, they expect it to be 32 bits.

    This is probably true, even if they think of it as meaning "four bytes", except that in fact both assumptions are (in the general case) incorrect. (CHAR_BIT is not obliged to be 8.)

    And the size in bits of int is not "at least 16 bits", but "at least as long as short but not longer than long". short is "at least 16 bits, but not shorter than char". char, in turn, is "at least 8 bits".

    After all, the correct value to pass as the amount to copy might well be just 1, which is sizeof(unsigned int) on a machine where the "software addressing word"(1) is 32 bits. So it should be using uint32_t and sizeof(head)...

    (1) The size contained at one specific address as seen by software: 8 bits for 6502 or 6809 or x86 or x86-64 or ARM, 16 bits for Nord 100, 60 bits for CDC Cyber 815, etc. Almost everything uses 8 bits these days, but "almost everything" is not actually everything.
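
    A minimal sketch of that fix, where len, address_len, and buffer stand in for whatever the vendor's snippet actually uses:

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>

    /* Sketch only: len, address_len, and buffer are stand-ins for the
     * vendor's variables. sizeof head keeps the memcpy length correct
     * even on a platform where sizeof(uint32_t) is not 4. */
    void write_header(unsigned char *buffer, uint32_t len, uint32_t address_len)
    {
        uint32_t head = htonl(len + address_len + 1);
        memcpy(buffer, &head, sizeof head);
    }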

  • (nodebb) in reply to dkf

    In embedded systems, the shifting would be the only WTF because the environment is constrained enough that the size of unsigned int would be known.

    I don't think I agree. The size of int is known on any implementation and, if you don't happen to know it, it is trivial to find out: sizeof (int). I would argue that embedded systems are the one place where int might not be 32 bits. (The last C compiler I worked with that did not have a 32-bit int on a general-purpose computer was Megamax C on the Atari ST.)

    On my Mac, htonl(3) is documented to return a uint32_t, a type that has been around since at least C99. This is the type they should have used.
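
    And if code really must assume a particular width, C11 lets you turn the assumption into a compile-time check rather than a latent bug; a sketch:

    #include <assert.h>
    #include <stdint.h>

    /* C11 static_assert: fail the build, not the customer,
     * if the width assumption is wrong. */
    static_assert(sizeof(int) == sizeof(uint32_t), "this code assumes a 32-bit int");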

  • (nodebb)

    As to the embedded systems comments: If the code the vendor has supplied is meant to talk to their device, the code isn't running on that device. It's running on your device or system. If my 64-bit PC is talking to their 8-bit gizmo, the int that matters for that code is the one my C compiler uses on my PC. Now maybe, depending on what exactly "device" means in this context, we're just chips on the same mobo. But probably not. And even then, the bitness that matters is that of the processor my code is running on, not the bitness of theirs.

    As to Remy's closing comment:

    The rest of the code is of a similar quality, and it raises the question: if this is what they're sending to customers to show them how to use the devices, what kind of code is on that device?

    My answer is: "The same as every other modern device: a buggy mess of stuff that works more by coincidence than by deliberate intent."

  • (nodebb)

    Actually, unsigned int by spec means it's bigger than a byte, which is the only data type defined in C specs by size (8 bits). So a 9bit unsigned int is actually a valid type, so is a 192bit one. An int should have the same size as a processor word, but that comes with its own problems.

    In theory the spec was designed to allow C to be ported to as many platforms as possible while leaving as much optimization room as possible for the compiler to use. In practice it meant a lot of developers went the easy route and didn't even bother implementing any safeguards in case the architecture changed, resulting in technical debt the moment a finger touched the keyboard. For that reason alone, modern languages have fixed data type sizes, and good languages have language support for checked/unchecked integer operations to allow the compiler to generate the best code for the targeted platform.

    Addendum 2023-11-15 12:33: Obviously with byte I meant char.
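
    (For what it's worth, on the checked-operations point: even in C, GCC and Clang expose checked integer arithmetic as builtins. A small sketch:)

    #include <stdio.h>

    int main(void)
    {
        unsigned int len = 4000000000u, extra = 500000000u, total;

        /* GCC/Clang builtin: stores the wrapped result in total and
         * returns true if the addition overflowed. */
        if (__builtin_add_overflow(len, extra, &total))
            fprintf(stderr, "overflow detected\n");
        else
            printf("total = %u\n", total);
        return 0;
    }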

  • Sauron (unregistered)
    unsigned int* wtf_pointer = NULL;
    
    // With the power of undefined behaviour, dereferencing a NULL pointer might actually make this comment become the frist.
    unsigned int wtf = *wtf_pointer;
    
  • Pabz (unregistered)

    It could be that the 0 << 24 was an indicator of some kind of flag that results in different behaviour if that bit is set to 1. If so, at the very least a comment documenting that would have been good, though.

  • (author) in reply to WTFGuy

    So, I'm actually familiar with the device in question (anonymizing it because it's a niche industry and there aren't a lot of vendors), and I'm not sure it even works by coincidence. I'm not sure it works at all.

  • (nodebb) in reply to MaxiTB

    Actually, unsigned int by spec means it's bigger than a byte, which is the only data type defined in C specs by size (8 bits).

    That's not true, actually. It's perfectly conformant for all the integer types to be 137 bits in length, in which case sizeof(int) would be exactly 1.

    So a 9bit unsigned int is actually a valid type

    No, it isn't, because int must be at least as long as short, and short must be at least 16 bits.
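
    To make that concrete: sizeof counts chars, not octets, and only CHAR_BIT tells you how wide a char is. A quick probe (standard C; only the 137-bit hardware is hypothetical):

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        /* On a hypothetical machine with 137-bit chars this would print
         * "char: 137 bits; int: 1 chars = 137 bits". */
        printf("char: %d bits; int: %zu chars = %zu bits\n",
               CHAR_BIT, sizeof(int), sizeof(int) * CHAR_BIT);
        return 0;
    }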

  • (nodebb)

    you can't send a packet of -5 bytes

    That gives me an idea. Who needs the recv() call? Just use send() with a negative length.

  • (nodebb) in reply to Pabz

    My thoughts exactly: it's an example of something that, in some circumstances, is not zero.

  • (nodebb) in reply to Steve_The_Cynic

    Wait, short is defined as a 16 bit number? I'm not aware of that; I know that short must be the same size or bigger than int, I can't recall there to be any definition beside char being 8 bit.

    Addendum 2023-11-15 12:38: Nvm, C99 added the constraint. I always forget that this version was the king of breaking changes.

  • (nodebb)

    There are 'experts' who insist that all sizes should be ints, not unsigned ints. Possibly because too many people have written loops that never terminate because 0 - 1 is not less than 0 (see https://github.com/isocpp/CppCoreGuidelines/blob/master/CppCoreGuidelines.md#Res-mix, though I think the anchors may be broken; look for ES.100).
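
    For anyone who hasn't been bitten yet, the non-terminating loop usually looks something like this (a minimal sketch, not code from the article):

    #include <stddef.h>
    #include <stdio.h>

    /* Buggy: i is unsigned, so "i >= 0" is always true; when i-- goes
     * past zero it wraps to SIZE_MAX, the loop never terminates, and
     * a[i] reads far out of bounds. */
    void backwards_buggy(const int *a, size_t n)
    {
        for (size_t i = n - 1; i >= 0; i--)
            printf("%d\n", a[i]);
    }

    /* One common fix: decrement in the condition, before the body runs. */
    void backwards_fixed(const int *a, size_t n)
    {
        for (size_t i = n; i-- > 0; )
            printf("%d\n", a[i]);
    }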

  • (nodebb) in reply to thosrtanner

    They are called Java developers, because signed 8-bit ints obviously make more sense than unsigned ones.

    In all seriousness, there's an argument to be made about designing a processor supporting only signed 32 and 64 bit operations: that would greatly decrease implementation complexity, making room for other stuff on the die, and the reduced complexity makes scheduling, parallelization, and other optimization strategies easier and cheaper to implement.

    Historically, however, before 64-bit native processors were widespread, unsigned types made a lot of sense. An integer value of 3 billion isn't that uncommon, and there's a surprisingly high number of cases where negative numbers make no sense; counts are a good example.

  • asdf (unregistered)

    It's not that uncommon to see bit flags defined as zero shifted to some position, next to another defined as a one shifted into the same position. As mentioned in another comment, it serves a documentary purpose. What you don't see is those definitions written out inline.
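
    A hypothetical sketch of that style (names invented for illustration):

    /* The 0-shift contributes nothing to the value, but it documents
     * that the field exists and which bit it occupies. */
    #define PKT_TYPE_NORMAL  (0u << 24)  /* bit 24 clear: ordinary packet */
    #define PKT_TYPE_URGENT  (1u << 24)  /* bit 24 set: urgent packet */

    unsigned int header = PKT_TYPE_NORMAL | 42u;  /* readable at the call site */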

  • Harrow (unregistered)

    ...breakpoints.

  • (nodebb) in reply to MaxiTB

    Wait, short is defined as a 16 bit number?

    At least 16-bit.

    I know that short must be the same size or bigger than int,

    You have that backwards. int must be the same size or bigger than short...

    I can't recall there to be any definition beside char being 8 bit.

    At least 8-bit.

  • (nodebb) in reply to MaxiTB

    In all seriousness, there's an argument to be made about designing a processor supporting only signed 32 and 64 bit operations

    One of the Cray machines did exactly that. It had no 16-bit integer type.

  • oni (unregistered)

    Incidentally, htonl has been defined as uint32_t htonl(uint32_t hostlong); since at least POSIX.1-2008. Probably for good reason...

    I was wondering if there was another potential problem lurking here, in case a C compiler defines int as 64-bit. When the 32-bit return value from htonl is promoted to a 64-bit int, it will be zero-extended. Subsequently copying only the 4 lowest-addressed bytes would copy 4 zero bytes, if the zero-extension happened to land on the low end. Then again, this would only be an issue on a big-endian system, because the byte swapping done by htonl on a little-endian CPU happens before the promotion to 64 bits. So it wouldn't be an issue for most people, but it's still ignorant and careless.
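
    A sketch of that failure mode, assuming a hypothetical big-endian platform with a 64-bit int:

    #include <string.h>
    #include <arpa/inet.h>

    void write_head(unsigned char *buffer)
    {
        unsigned int head = htonl(0x01020304u); /* uint32_t result, zero-extended to 64 bits */

        /* On big-endian, the zero-extension occupies the four lowest-addressed
         * bytes of head, so this copies 00 00 00 00 instead of 01 02 03 04. */
        memcpy(buffer, &head, 4);
    }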

  • (nodebb)

    This reminds me of the time some 5 years ago that I was working on an embedded device. The thing was a portable POS system - it had a card reader, GSM modem, receipt printer, numeric keypad, barcode reader and a (resistive) touchscreen all in one handheld brick-sized package. It was running Windows CE 5, I think. Did I say this was 2018? Yeah. Those things were BOUGHT obsolete. Probably because they were cheaper. Obviously, they were a nightmare to work with. The .NET Compact Framework was there (don't remember if version 1 or 2), which we used to make our app.

    Some highlights:

    At one point we realized that .NET Compact no longer supports any ciphers that modern TLS implementations still accept, so HTTPS was no longer working. Luckily we were able to force the backend to use an outdated method. At least it's better than plain HTTP.

    In the app we added code that periodically checked and reconnected the GSM connection, because back then you still needed to deliberately CONNECT to the internet, and there was no automatic reconnection.

    I had to work with the built-in barcode scanner and the thing was absolutely flaky. In the end I decompiled the supplied libraries and bypassed them entirely, just to connect directly to the hardware. Luckily it communicated over a standard COM port, so there wasn't any rocket science. It also helped a lot that the barcode module was manufactured by a separate company and had decent documentation. In the end it still hiccupped occasionally, but much less than with the provider library.

  • (nodebb)

    The Harris Datacraft had 24-bit words. There were five DEC computers, PDP-1 ... PDP-15, that were 18-bit.

  • the cow (not the robot) (unregistered) in reply to Barry Margolin

    That's probably what C would become in Orwell's 1984: newC...

  • Dan S (unregistered)

    I remember working with PRIMOS C, where pointers were three 16-bit words. Good times lol.

  • Officer Johnny Holzkopf (unregistered) in reply to Pabz

    It is also possible that the "0<<24 | something" was someone's idea of a "format template" for a numerical value, i.e., 0 << 24 = 000000000000000000000000, that is, 3 pieces of 00000000, and onto that we write len + address_len, and another + 1 because somehow one thing at the end was always missing ... however, 24 isn't 32 (because memcpy()'s last parameter is 4), and 4 * 8 is ... never mind, management has signed it off as "good enough for our stupid customers" ...
