• P (unregistered)

    #define DAT_FRIST_ID_OFFSET 0x00000001

  • James (unregistered)

    I would like to know more here. Embedded systems programming is not like writing C for your PC. There might be good reasons to want to ensure the layout of these structs remains fixed. For example, maybe there was a requirement that the software be upgradeable without wiping the current data? Given this stuff is being written to flash, it should be thought of as a binary file format. You can't have the compiler deciding to pack the structs differently after a code change, or you'll corrupt the data.

    Sure, there are better ways than reading/writing structs out to permanent memory, but they all involve some number of cycles shuffling stuff around in memory, doing multiple reads, offset calculations, etc. Knowing everything is fixed-size and fixed-layout, and just being able to copy it all as one big chunk to or from the address of some working-storage struct, is fast and easy. That might be important on a busy microcontroller with little headroom.

    Now it's also true there are usually better ways to convince the compiler to keep the structs packed a certain way, #pragma this or that... but again, embedded isn't like working on a PC. For all we know this is some custom, controller-specific compiler that does not support that kind of thing (or did not when the code was written), and this was a sensible workaround. I am not convinced this is a WTF so much as just old code that did not enjoy the benefit of modern abstractions.
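
    For illustration, a minimal sketch of that "one big chunk" idea, with a made-up record type and a hypothetical flash_write() routine (not the article's code): once the layout is fixed, persisting a record is a single contiguous copy.

        #include <stddef.h>
        #include <stdint.h>

        /* Made-up record with an explicitly fixed layout: every field is a
           fixed-width type and the total size is a multiple of 4, so its
           on-flash image is stable as long as the definition is stable. */
        typedef struct {
            uint32_t id;
            uint32_t flags;
            uint8_t  name[16];
        } record_t;

        /* Hypothetical flash driver entry point. */
        extern int flash_write(uint32_t address, const void *data, size_t len);

        int save_record(uint32_t address, const record_t *rec)
        {
            /* One contiguous copy, no per-field serialization. */
            return flash_write(address, rec, sizeof *rec);
        }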

  • bvs23bkv33 (unregistered)

    COBOL is The Father of all structs

  • Anon (unregistered)

    Sounds like someone just not trusting the tools to do the right thing and trying to reinvent the optimisation in the code. Something I have come across before.

    Of course, when running something in a very restricted system where every ounce of space or speed needs to be considered, they might be right not to trust the tools implicitly. But this is something that could have been tested (and of course, by not trusting the compiler to optimise, they have ended up using up premium space instead).

  • (nodebb)

    Do not meddle in the affairs of compilers, for they are subtle and quick to anger.

  • jiigrec (unregistered) in reply to James

    Author here. Well, actually, no requirements for keeping any data: the controller acts like a bridge, and any database update wipes pre-existing data by design.

    Being a bridge, the CPU never goes over 10% utilization, so again, no need for any specific optimizations.

    The worst part is that it is not old code. The product is new; the code was written two years ago at most. My take is that it was written by a junior without any consideration for future maintenance or readability.

  • Steve_The_Cynic (nodebb)

    I'm not really fussed about the optimisation that Remy points out. It does save a small amount of time per comparison, as we can remove an ALU operation to calculate the offset, but the penalty in readability is fairly high. (1) A friendly "offsetof" macro to calculate the raw offset would be a good idea, rather than hand-calculating the offset, but OK.
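
    Something along these lines, say (field names invented for illustration, not the article's actual struct):

        #include <stddef.h>
        #include <stdint.h>

        /* Field names invented for illustration. */
        typedef struct {
            uint32_t id;
            uint32_t payload;
        } DAT_tRecord;

        /* Let the compiler compute the byte offset instead of hand-coding a
           magic number; this stays correct if the layout ever changes. */
        #define DAT_ID_OFFSET      offsetof(DAT_tRecord, id)
        #define DAT_PAYLOAD_OFFSET offsetof(DAT_tRecord, payload)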

    What does make me wonder, though, is the call to xSemaphoreGive(DAT_tMutexDatabase);. This bothers me, because it's indicative of a confusion somewhere. Is it a semaphore (as indicated by the function name) or a mutex (as indicated by the variable name)?
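
    If this is FreeRTOS, which the xSemaphore* naming suggests, the mismatch is arguably baked into the API: a mutex is created with a mutex-specific call but taken and given through the generic semaphore macros. A minimal sketch, assuming FreeRTOS:

        #include "FreeRTOS.h"
        #include "semphr.h"

        /* In FreeRTOS a mutex is a kind of semaphore: created with a
           mutex-specific call, but taken and given through the generic
           xSemaphoreTake / xSemaphoreGive macros. */
        static SemaphoreHandle_t DAT_tMutexDatabase;

        void database_init(void)
        {
            DAT_tMutexDatabase = xSemaphoreCreateMutex();
        }

        void database_update(void)
        {
            if (xSemaphoreTake(DAT_tMutexDatabase, portMAX_DELAY) == pdTRUE)
            {
                /* ... touch the shared database ... */
                xSemaphoreGive(DAT_tMutexDatabase);
            }
        }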

    (1) On some architectures this will make essentially no difference, but on others (typically older ones), the readable version may even cause extra instruction fetches if the ISA lacks adequate addressing modes. x86-32 has all the addressing modes you'd need for this to work correctly, but I don't know whether there's an extra cost for using the "adding" modes (example: CMP ax, [ebx+8]) compared to the "plain" mode (example: CMP ax, [ebx])(2)

    (2) Intel-style mnemonics because I hate GNU style with a passion that burns with the fire of a small match.

  • ooOOooGa (unregistered) in reply to James

    The compiler shouldn't be changing the memory layout of the structs just because of a code change. Unless the code change was actually changing the elements of the struct. And if you think that changing the struct definition is not going to change the struct memory layout, then I don't know what to tell you.

    The struct memory layout can change because of a compiler change. Either an update to the compiler or a new compiler vendor entirely. But changing the compiler is usually a huge and critical process in a lot of applications for that very reason. The new compiler has to be tested, verified, and validated to be sure that it is compatible with the previous data that the application has already written.
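
    If the toolchain is modern enough for C11 (which the original authors may not have had), one cheap guard is to pin the layout with compile-time assertions, so a compiler change that moves anything breaks the build instead of the data. Example layout invented here:

        #include <assert.h>   /* static_assert (C11) */
        #include <stddef.h>   /* offsetof */
        #include <stdint.h>

        /* Example layout, not the article's. */
        typedef struct {
            uint32_t id;
            uint16_t type;
            uint16_t flags;
            uint8_t  name[16];
        } record_t;

        /* If a new compiler or new packing flags move anything, these fail
           at compile time instead of at data-migration time. */
        static_assert(sizeof(record_t) == 24,        "record size changed");
        static_assert(offsetof(record_t, type) == 4, "type field moved");
        static_assert(offsetof(record_t, name) == 8, "name field moved");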

    That is no excuse for writing a monstrosity like the one in this article.

  • Angela Anuszewski (google) in reply to James

    gcc is perfectly capable of doing #pragma pack successfully.
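
    For example, either spelling works in gcc (struct invented for illustration):

        #include <stdint.h>

        /* Push the current packing, force 1-byte packing, then restore. */
        #pragma pack(push, 1)
        typedef struct {
            uint8_t  type;
            uint32_t value;   /* no padding inserted before this field */
        } packed_msg_t;
        #pragma pack(pop)

        /* gcc/clang also accept the attribute form, with the same effect: */
        typedef struct __attribute__((packed)) {
            uint8_t  type;
            uint32_t value;
        } packed_msg2_t;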

  • I Saw a Robot (unregistered) in reply to Steve_The_Cynic

    You mean AT&T syntax?

  • I Saw a Robot (unregistered)

    Typically, the reason for these weird structure constructs is to mesh with the coprocessors' register memory formats. Look at the Renesas RZ/A datasheet, for example, and you see tons of 32-bit registers that only define 8 bits of data. Sometimes this is for expandability, other times simply for simplicity in decoding addresses or the like. But even that can be done with careful struct definitions (as long as all the registers in question are consecutive). This seems to be an example of the "programmer" not trusting the compiler, or not understanding pointers.
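
    The usual shape of that careful struct definition, with invented register names and a made-up base address rather than the RZ/A's real map:

        #include <stdint.h>

        /* Memory-mapped peripheral block: each register occupies a full
           32-bit slot even where only the low 8 bits are defined, so the
           struct layout mirrors the address decoding exactly.
           Names and base address are made up. */
        typedef struct {
            volatile uint32_t CTRL;       /* only bits 7:0 defined */
            volatile uint32_t STATUS;     /* only bits 7:0 defined */
            uint32_t          reserved0;  /* hole in the address map */
            volatile uint32_t DATA;
        } uart_regs_t;

        #define UART0 ((uart_regs_t *)0x40001000u)

        static inline void uart_enable(void)
        {
            UART0->CTRL |= 1u;   /* consecutive registers, fixed offsets */
        }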

  • Steve_The_Cynic (nodebb) in reply to I Saw a Robot

    You mean AT&T syntax?

    The style I used is what Intel / Microsoft / Borland used for x86. I don't know the history of the ASM style used by GCC / GAS, so maybe you're right.

  • sizer99 (google)

    Since nobody's said the magic words yet, this is a very obvious case of Premature Optimization.

    I've got a very similar requirement in one of my embedded products where we have to handle about 40 messages. I just use a const array of perfectly vanilla structs, let the compiler do everything and just check against msgInfo[idx].msgId. I use a binary search just because I have a toolbox generic I wrote years ago that works, but I'll happily admit the performance was just fine with a simple linear search even on this dinky SoC.

    My using a binary search has one downside - the message table in flash (therefore in code) must be kept in order (enforced with a check on startup). This WTF has far too many downsides for no real upside.
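
    Roughly this shape, for the curious (names and message IDs invented for illustration):

        #include <assert.h>
        #include <stddef.h>
        #include <stdint.h>
        #include <stdlib.h>

        typedef struct {
            uint16_t msgId;
            void (*handler)(const uint8_t *payload, uint16_t len);
        } msg_info_t;

        /* Handlers assumed to exist elsewhere. */
        extern void handle_ping(const uint8_t *payload, uint16_t len);
        extern void handle_reset(const uint8_t *payload, uint16_t len);

        /* Plain const structs, laid out however the compiler likes.
           Must stay sorted by msgId for the binary search to work. */
        static const msg_info_t msgInfo[] = {
            { 0x0001, handle_ping  },
            { 0x0010, handle_reset },
        };
        #define MSG_COUNT (sizeof msgInfo / sizeof msgInfo[0])

        static int cmp_msg(const void *key, const void *elem)
        {
            uint16_t id = *(const uint16_t *)key;
            const msg_info_t *m = elem;
            return (id > m->msgId) - (id < m->msgId);
        }

        const msg_info_t *find_msg(uint16_t id)
        {
            return bsearch(&id, msgInfo, MSG_COUNT, sizeof msgInfo[0], cmp_msg);
        }

        /* Startup check: fail fast if someone breaks the sort order. */
        void check_msg_table(void)
        {
            for (size_t i = 1; i < MSG_COUNT; i++)
                assert(msgInfo[i - 1].msgId < msgInfo[i].msgId);
        }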

  • (nodebb)

    While looking really quite different, that code reminds me a lot of how some of our embedded C used to be. Someone long ago had decided not to trust the compiler to lay out structs consistently (no idea why), and my coworkers had copied that style of programming all over our codebase, resulting in insanely complex code given the simple task it was doing. (Their favoured technique was the big enum of offsets, effectively giving the same information as you'd get with a struct but in a way more headache-inducing form!) Removing that shit from 90% of our code was easy; the other 10% was written by a demented jerk who liked going back and forth between doing address arithmetic on bytes, words and half-words (it was deserializing data from a network protocol, so it was particularly important to get right). I hate dealing with code written by hardware designers!

    Keeping everything laid out on 4-byte boundaries is very much not a WTF in embedded code. If you're using an ARM, for example, you pay a considerable cost for non-aligned accesses (especially for writes!) and you'll want to avoid that where you don't strictly need to save the memory. The best memory saving technique is generally to put sub-word-size fields next to each other, particularly trying to get fields of the same type together. That lets you pack fairly closely without paying the non-aligned access cost too often.
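
    To illustrate the ordering point with made-up fields:

        #include <stdint.h>

        /* Badly ordered: narrow fields before wider ones force padding.
           Typical size on a 32-bit ARM ABI: 12 bytes. */
        struct bad {
            uint8_t  a;
            uint32_t x;   /* 3 padding bytes inserted before this */
            uint8_t  b;
            uint16_t y;   /* 1 padding byte inserted before this */
        };

        /* Same fields grouped by size: everything stays naturally aligned
           and the struct shrinks to 8 bytes, with no unaligned accesses. */
        struct good {
            uint32_t x;
            uint16_t y;
            uint8_t  a;
            uint8_t  b;
        };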

  • (nodebb)

    TRWTF here is the use of Hungarian. This causes the compiler psychological trauma, preventing it from being able to concentrate on producing efficient code. The hand optimisations are probably an attempt to compensate for this.

  • I dunno LOL ¯\(°_o)/¯ (unregistered)

    In this case, I hope that the Hungarian was more for the benefit of the TDWTF reader than having been in the original source. Of course, having done embedded myself, I know it was very likely done that way.

    As for saving cycles with alignment, sure, if it's something being accessed frequently and doing a lot of work, like in a DSP algorithm. If there's plenty of unused CPU power (as is increasingly likely with modern embedded CPUs), it's useless to save a few cycles you don't need while wasting the much more valuable kind of cycles: those of the poor schmuck who has to maintain the code later.

  • WTF (unregistered)

    " pretty much any C compiler is going to compile the -> operator into a static memory offset anyway"

    You wish. gcc never did that for any of our embedded targets. Something about register allocation being too difficult for the register allocation optimizer.

  • (nodebb) in reply to WTF

    gcc never did that for any of our embedded targets

    It depends on whether you've got static offset support (of the right size) in the ISA. An embedded target might not have that.

  • DCL (unregistered) in reply to bvs23bkv33

    PL/1 is even better.

  • Ricardo Rossello (unregistered)

    Pile on top of that bitch! [translated from Spanish]
