• (nodebb)

At least it wasn't:

```
#define ZERO    1
#define ONE     0
```
• Prime Mover (unregistered)

So TRWTF is that it really ought to be:

```
#define ZERO    0
#define ONE     1
#define TWO     2
#define THREE   3
#define FOUR    4
#define FIVE    5
#define BIT0    (ONE << ZERO)
#define BIT1    (ONE << ONE)
#define BIT2    (ONE << TWO)
#define BIT3    (ONE << THREE)
#define BIT4    (ONE << FOUR)
#define BIT5    (ONE << FIVE)
```

... yes?

• Sole Purpose Of Visit (unregistered) in reply to Prime Mover

Slight problem with endian-ness there?

• (nodebb) in reply to Prime Mover
```
#define ZERO    0
#define ONE     1
#define TWO     ONE+ONE
#define THREE   TWO+ONE
#define FOUR    TWO+TWO
#define FIVE    THREE+TWO
#define BIT0    (ONE << ZERO)
#define BIT1    (ONE << ONE)
#define BIT2    (ONE << TWO)
#define BIT3    (ONE << THREE)
#define BIT4    (ONE << FOUR)
#define BIT5    (ONE << FIVE)
```

There, fixed it for you.

• (nodebb)

All of this leaves us with one more question: why on Earth is size a bitmask?

`1 << 3` is 8, the number of bits in a byte, and `1 << 5` is 32, the number of bits in an `int` (in LP64 and on 32-bit architectures).

Addendum 2021-11-29 07:39: I realise that doesn't answer the question but it does give us the logic of the specific values.

• (nodebb) in reply to Mr. TA

Still wrong

```
#define ZERO    0
#define ONE     1
#define TWO     (ONE << ONE)
#define THREE   ((ONE << ONE) + ONE)
#define FOUR    (ONE << (ONE << ONE))
#define FIVE    (((ONE << ONE) + ONE) + (ONE << ONE))
#define BIT0    (ONE << ZERO)
#define BIT1    (ONE << ONE)
#define BIT2    (ONE << TWO)
#define BIT3    (ONE << THREE)
#define BIT4    (ONE << FOUR)
#define BIT5    (ONE << FIVE)
```

I also contemplated defining a macro for the operation

```
#define ONE_LEFT_SHIFTED_ONE_TO_THE_POWER_OF(x)  (ONE << (x))
```

to take the place of `<<` but I've got work to do this afternoon.

• MacFrog (unregistered) in reply to Jeremy Pereira

TRWTF is that size is not a bitmask. It would also be TRWTF if it were.

• (nodebb) in reply to MacFrog

True, except that the way they defined it makes it look like it's a bitmask, which is also a WTF.

• 516052 (unregistered)

I have seen stuff like this again and again. And honestly I think the intention is good but the implementation is severely misguided.

I mean, for a start who is supposed to use these constants? Even the original developer is likely to forget about them and others won't ever find out unless they know what to look for. That's no good.

Then we also have the issue of them being limited to 1-5. What happens when you want to add 6 or 11 or 43? Do you dig through the source files to find the right one, or do you just define your new constant wherever? And of course that opens up the can of worms that is include order. What happens if you remember to include the file for 1-5 and 7-11 but not the one containing the `#define` for 6?

Then we have types to think about. Seriously, what if you need your constant for the number 3 to be a decimal, or maybe you want your constant for 7 to be only 8 bits? Don't laugh! Memory saving is a thing, you know.

Finally there is the issue of language. What if you are working on a team where not everyone knows the American word for whatever number you want to use?

No, what's clearly needed here is some sort of standardized centralized enterprising number generation system. Description in next post.

• 516052 (unregistered)

First, start with a NUMBER class. Class Number { int INT; float Float; double Double; ... ... }

This immediately gives us all the various formats without having to worry about nasty things like rounding and casting. Also, the values are not connected. So if you really want you can just go Number("Three").Int = 5 and that will work. Just in case you need to. And it won't break things for anyone else in your program like #Define Three 5 would!

Of course all this relies on the objects being actual objects and not some sort of ugly static construct, so we need a factory class as well. Let's call it SpecialNumberFactory. SpecialNumberFactory has a method called GetNumber that takes a string. When called, it first checks to see if that string is in English and if not runs it through Yandex Translate (because Google is an evil big corporation and not hipster enough) to convert it.

Then it connects to one of those online word-to-number converter sites and translates the word into a new Number object filled out to that number. That way each user can just write SpecialNumberFactory.GetNumber("Number") every time he would otherwise be forced to write an ugly magic number.

That's much cleaner.

• Prime Mover (unregistered) in reply to 516052
```
SpecialNumberFactory.GetNumber("Euler-Mascheroni Constant")
```
• Sole Purpose Of Visit (unregistered) in reply to Jeremy Pereira

This is sooo much better.

And now we see the importance of using good ole C macros to define magic numbers in a lil'ole bitty environment!

• Duston (unregistered)

"That's much cleaner." You forgot to mention the XML configuration file to make it Enterprisey.

• 516052 (unregistered) in reply to Duston

XML... yea, we could do that. Good thinking. We don't want this to be nonconfigurable.

Hell, why stop there? We can completely localize everything by having the program instead call on a big centralized Excel database containing all the translations and numbers and even the word-to-number scripts! That way we don't have to rely on the internet and can also account for the possibility that the higher-ups might want to redefine a word or number.

• Grzes (unregistered) in reply to Jeremy Pereira

You need parentheses.

```
#define ZERO    0
#define ONE     1
#define TWO     ((ONE) << (ONE))
#define THREE   ( ((ONE) << (ONE)) + (ONE))
#define FOUR    ((ONE) << ((ONE) << (ONE)))
#define FIVE    ( ( ((ONE) << (ONE)) + (ONE)) + ((ONE) << (ONE)))
#define BIT0    ((ONE) << (ZERO))
#define BIT1    ((ONE) << (ONE))
#define BIT2    ((ONE) << (TWO))
#define BIT3    ((ONE) << (THREE))
#define BIT4    ((ONE) << (FOUR))
#define BIT5    ((ONE) << (FIVE))
```

• Sou Eu (unregistered)

Instead of using constants named BIT0 ... BIT5, useful names like SIZE_BIT, SIZE_BYTE, SIZE_INT, etc. should be used. That still leaves me scratching my head over why not use the `sizeof` operator, unless the author wants the same size regardless of the architecture used. I'm also curious about storing size in a char instead of an unsigned byte.

• Argle (unregistered)

Going back to 1979, I was a kid involved with a project done in ForTran IV. For good reason, there were global variables like ZERO, ONE, TWO, TEN, etc. If this is bewildering, keep in mind that ForTran IV passes all parameters as reference types. Thus, if you passed 2 to a function, the compiler created an unnamed variable and put the value 2 in it... which could be modified by the subroutine. (I sense more than a few people cringing over this.) But in the tight space in our hardware, we didn't have the luxury of letting the compiler do this every time you needed a common constant. So, lots of magic names and numbers. Such fun.

• (nodebb) in reply to Argle

Thus, if you passed 2 to a function, the compiler created an unnamed variable and put the value 2 in it...

In some systems, the Fortran compiler would create one unnamed variable (call it, say, 2 for 2) that was used for all uses of the constant 2, and, sadly, it was writeable(1), which had ... entertaining ... consequences.

(1) On Sun's compiler for Solaris, it wasn't writeable.

Addendum 2021-11-29 13:12: Ugh. The bold italic 2 was meant to be triple-underscores before and after.

• Actually (unregistered) in reply to Mr. TA

You're all wrong:

```
#define ONE     1
#define ZERO    ONE-ONE
#define TWO     ONE+ONE
#define THREE   TWO+ONE
#define FOUR    TWO+TWO
#define FIVE    THREE+TWO
#define BIT0    (ONE << ZERO)
#define BIT1    (ONE << ONE)
#define BIT2    (ONE << TWO)
#define BIT3    (ONE << THREE)
#define BIT4    (ONE << FOUR)
#define BIT5    (ONE << FIVE)
```

There can be only one magic number.

• Argle (unregistered) in reply to Steve_The_Cynic

Heheh, I was almost going to remark on those consequences of being writable, but it seems about as quaint as talking about automobiles from the '30s, with a choke and a couple of extra pedals I don't recognize. As well as a non-collapsible steering column aimed right at your heart.

• Sole Purpose Of Visit (unregistered) in reply to Actually

Still not magic enough. Trivially, you are missing the parentheses necessary for "safe C macros." But also, why not leverage Scheme?

```
#define ZERO    ()
#define ONE     (())
#define TWO     (cons ONE ONE)
#define THREE   (cons TWO ONE)
#define FOUR    (cons THREE ONE)
#define FIVE    (cons FOUR ONE)
#define (pow-tr2 a b) (let pow-tr2-h [(b b) (result 1)] (if (= b 0) result (pow-tr2-h (- b 1) (* result a))))
#define SHIFT_1 (pow-tr2 ONE ONE)
#define SHIFT_2 (pow-tr2 ONE TWO)
#define SHIFT_3 (pow-tr2 ONE THREE)
#define SHIFT_4 (pow-tr2 ONE FOUR)
#define SHIFT_5 (pow-tr2 ONE FIVE)
#define BIT0    ZERO /* Let's not try too hard */
#define BIT1    SHIFT_1
#define BIT2    SHIFT_2
#define BIT3    SHIFT_3
#define BIT4    SHIFT_4
#define BIT5    SHIFT_5
```

The SHIFT_x definitions are obviously there to make combinatorial bit-shifting a lot clearer to the reader.

All we need to do now is to write an intermediate Scheme REPL to sit in between the C preprocessor and the target machine, and we're done!

I believe this is the last word on the subject.

• Loren Pechtel (unregistered)

I think this is a case of legacy support. Given the function it seems to me like it's decoding a stream of some kind. Are we reading an 8-bit field or a 16-bit field? I suspect that size was packed along with something else into a byte but that has already been decoded before reaching this routine.

• NotThatInteresting (unregistered)

It does remind me of an industrial network protocol I once worked on: the first two bytes defined the command and sub-command type and the number of bytes that contain the size of the data payload.

e.g. Byte 1: [bits 7-0 = command]; Byte 2: [bits 7-3 = sub-command, bits 2-0 = number of bytes (n) that hold the payload size]; Bytes 3-(n+3): [data size, little-endian].

That might explain the 'size'

• (nodebb) in reply to NotThatInteresting

I read that, and now my brain hurts. It seriously transmitted the size of the size?

• WTFGuy (unregistered)

I could certainly see "size of size" being a thing back in the days of 8-bit hardware with 8-bit math being the norm. And with some very simple industrial machines having payloads of a few bytes, or a few dozen, but definitely below 256 bytes, while other industrial machines would have whopping payloads of multiple K bytes.

Knowing whether to use 8-bit math, or 16- bit, or, gasp!, 32-bit emulated-in-software math, might well be the difference between keeping up with the data rate or not.

I'd bet that from the POV of any given industrial gizmo receiving commands or sending status data, its in- or out-bound payload size is nearly fixed and so its size-of-size is a constant. Which makes its internal programming easy. Conversely, the more sophisticated devices monitoring the network and controlling the hundreds of gizmos in the factory would have to deal with multiple length ranges coming from multiple types of gizmos.

Those controllers would also have much more powerful processors; maybe even 8080s or 6502s. Ye Olden Tymes were ... quaintly feeble.

• (nodebb) in reply to WTFGuy

Those controllers would also have much more powerful processors; maybe even 8080s or 6502s. Ye Olden Tymes were ... quaintly feeble.

For sure. I once had a colleague who evaluated (for work) a 4-bit Hitachi thing, what we'd now call a SoC, but four bits wide, while I busily chipped away at a solution for a different problem that used a Texas Instruments SSP (Sensor Signal Processor). It achieved the rather quaint goal of being the smallest memory device I ever worked with, using just 576 bits of internal RAM. (We had a 2048-byte serial EEPROM to store our program, but in theory you could store all your program on an internal masked ROM, and then you could play with the entirety of the 576 bits of RAM.)

This would have been around 1992, by the way.

Addendum 2021-11-30 09:37: Approximately this thing: http://www.farnell.com/datasheets/84067.pdf

• (nodebb) in reply to Prime Mover

SpecialNumberFactory.GetNumber("Chaitin's Constant")

• Barf4Eva (unregistered) in reply to Actually

well done, highlander!

• LoganDark (unregistered) in reply to Steve_The_Cynic

Let's just agree that the entire thing is a WTF.

• Craig (unregistered) in reply to Steve_The_Cynic

Writable literals in Fortran were still happening as recently as the Digital compiler for Windows in the mid-90s (which I know was fully F90, and I think supported F95); I had a fun bug to fix that arose from assigning a non-zero value over the literal 0.0. I think that family moved the literals into a read-only segment in the first Compaq-branded version of the compiler in the late '90s.

It should be noted that this was the Fortran equivalent of undefined behavior (an illegal program, but not one required to be caught by the compiler or runtime), and is only a realistic hazard with a "legacy" style of programming that doesn't use things like modules or interfaces or INTENT on arguments.
