• (cs)

    There's no way this is faster... somebody benchmark this please!

  • KiwiBlue (unregistered)

    I'll bet the author of this WTF has seen the proverbial template meta-program for calculating factorials.

  • th0mas (unregistered)

    It's a good thing we never have to multiply by a negative...

  • (cs)

    Holy moly!

    This is exactly the same speed as multiply at runtime, and probably slower at compile time.

  • (cs)

    Holy shit.  That's just wrong.  Oh so wrong.

    What kills me is that the guy was apparently smart enough to figure this out, but dumb enough not to realize that, not only is he not realizing any performance gains, he's exploding the compile time should he use this functional template a number of times.

  • snowyote (unregistered) in reply to loneprogrammer

    Well, if you have to do two multiplies, like this:


    typedef QuickMultiply<2,6> twelve;

    typedef QuickMultiply<2,7> fourteen;


    The first line will instantiate the following templates:

    QuickMultiply<2,6>

    QuickMultiply<2,5>

    QuickMultiply<2,4>

    QuickMultiply<2,3>

    QuickMultiply<2,2>

    QuickMultiply<2,1>


    But the second line will only have to instantiate:

    QuickMultiply<2,7>


    Because once it reduces it to <2,6> it finds that the template has already been instantiated and lives in typespace somewhere.  Maybe if this was being used for a ton of compile-time multiplies where the domain of the first argument was very limited... it's hard to tell from just this snippet whether using metaprogramming here is completely ridiculous, but it's even harder to believe that it isn't.


    This is not how Boost does it, so I wouldn't recommend it, but if you look at Boost's times.hpp and how it implements arithmetic via template metaprogramming, it seems like much -more- of a WTF :)

  • (cs)

    snowyote: I think you're missing the point. The programmer took an overly-complicated route to speed up SIMPLE MULTIPLICATION. The program would need to be processing trillions of calculations over long periods of time to require a performance boost to simple math functions.

  • (cs) in reply to Manni

    Especially since you'd have to have quite the brain-dead compiler in order for it not to optimize constant multiplication away.

  • snowyote (unregistered) in reply to Manni

    nodnod Sort of missing the point on purpose, really - I don't think this is defensible (especially since the typespace pollution this yields will slow down all type lookups, bogging down compilation in general), but it's more engaging for me to try to figure out what someone might've been thinking when writing this code than just dismissing it out-of-hand.  That, and I have a habit of assuming when I read other people's code that the author is more clever than me :)

    It can be useful to templatize what would otherwise be compile-time operations like multiplication, just so that you have a uniform interface if you're doing other metaprogramming... for example, you could have a partially specialized QuickMultiply which you could then use as a 1-ary functor and apply it to a sequence.  See http://www.boost.org/libs/mpl/doc/refmanual.html for examples.  Again, I don't believe that that's what's going on here, but it's a fun mental exercise.

  • JohnO (unregistered) in reply to Manni

    SnowyOte and Manni, you are both missing the point that this is happening at COMPILE time.  WTF #1: why would you employ this complex construct to improve COMPILE-time performance of multiplication by small numbers?  WTF #2: if the numbers are small, just do the damn multiplication in your head and write that directly into the code with a comment showing the two numbers that went into the result (assuming you need that).

  • (cs) in reply to JohnO

    The biggest WTF about this is that it's a code obfuscation that accomplishes less than nothing.  Something I see all the time.

  • Anonymous Pang Mao (unregistered) in reply to JohnO

    Ok, here's a more specific boost link:

    http://www.boost.org/libs/mpl/doc/refmanual/metafunctions.html

    Here's the relevant bit:

    "All other considerations aside, as of the time of this writing (early 2004), using built-in operators on integral constants still often present a portability problem — many compilers cannot handle particular forms of expressions, forcing us to use conditional compilation. Because MPL numeric metafunctions work on types and encapsulate these kind of workarounds internally, they elude these problems, so if you aim for portability, it is generally adviced to use them in the place of the conventional operators, even at the price of slightly decreased readability."

    So, theoretically, they could be trying to avoid conditional compilation to handle cross-platform issues, in lieu of just doing the multiplication in their heads... nevertheless I still think it's a WTF ;-)

  • (cs) in reply to Anonymous Pang Mao
    Anonymous:
    Ok, here's a more specific boost link: http://www.boost.org/libs/mpl/doc/refmanual/metafunctions.html Here's the relevant bit: "All other considerations aside, as of the time of this writing (early 2004), using built-in operators on integral constants still often present a portability problem — many compilers cannot handle particular forms of expressions, forcing us to use conditional compilation. Because MPL numeric metafunctions work on types and encapsulate these kind of workarounds internally, they elude these problems, so if you aim for portability, it is generally adviced to use them in the place of the conventional operators, even at the price of slightly decreased readability." So, theoretically, they could be trying to avoid conditional compilation to handle cross-platform issues, in lieu of just doing the multiplication in their heads... nevertheless I still think it's a WTF ;-)


    C++ templates form a turing-complete, compile-time language in their own right.
    This wasn't the intention of Bjarne Stroustrup.
    I think he deserves some special spot in the WTF Hall of Fame for that, in a not-too-disrespectful way.
  • Matt B (unregistered) in reply to travisowens
    travisowens:
    There's no way this is faster... somebody benchmark this please!


    omg I think you unraveled the mystery of why this is posted on a site called "The Daily WHAT THE FUCK" !!
  • (cs)

    This is slightly aimed at the people bringing up the BOOST library to explain the motives of this WTF creator...

    even IF and only IF this person had intended to speed up the multiplication of numbers, this method would NOT accomplish it.  Dusting off my computer engineering degree a bit, I took a class in which we studied at least 5 different kinds of hardware multipliers.  Only the most basic multiplier taught to first-year students would attempt to implement a multiplier by doing a series of additions.

    I actually took a look at this so called "BOOST" library.  Biggest load of crap ever.  First of all, all these so-called "optimizations" will be accomplished in most modern compilers, and the rest will be implemented in HARDWARE.  You think the boys at Intel and AMD haven't tried to make a better multiplier?  You know how to write a better compiler than Intel?  Even the Microsoft compiler is pretty gosh-darn spiffy.

    The only "optimizer" for multiplication that I use is using a left shift for multiplying by power-of-two numbers, and even then, only when speed is of the essence in Debug mode, since that optimization will be done by the compiler for you in Release mode.

    The only use for OCD (obsessive-compulsive disorder) optimization code and metaprogramming is for things like cell-phone apps, when the hardware is likely not optimized because of die-space and power consumption.  Even in time-critical apps, human optimization will likely break other compiler optimizations. 

  • (cs) in reply to Anonymous Pang Mao

    I think that excerpt refers to portability problems that the Boost MPL's implementation faced, rather than problems you can fix by using Boost MPL... but yeah, the conclusion is hard to avoid that this is just a WTF. :)

    Templates are hard stuff, though, perhaps the author was just trying to teach herself how to use the syntax within the confines of a problem where the results were immediately available and understandable.  God knows I was guilty of the same ;)

  • JGW (unregistered) in reply to Charles Nadolski

    I actually took a look at this so called "BOOST" library. Biggest load of crap ever.

    Before you cast aspersions, may I suggest that you examine the whole Boost library, and dismount your high-horse. It is far from being - as you put it - the, "Biggest load of crap ever", and is, perhaps, more correctly described as "One of the most frequently used C++ libraries, much of which is likely to be made part of the C++ standard".

    Furthermore, if:

    The only "optimizer" for multiplication that I use is using a left shift for multiplying by power-of-two numbers, and even then, only when speed is of the essence in Debug mode, since that optimization will be done by the compiler for you in Release mode.

    Then you are putting far too much of your attention into the debug builds that your compiler generates. I'm curious as to the situations where you build a debug version that states (for example) i <<= 1, but in the release build states that i *= 2. Is your code really littered with:

    int i(10);

    ...

    #ifdef DEBUG

    i <<= 1;

    #else

    i *= 2;

    #endif

    The only use for OCD (obsessive-compulsive disorder) optimization code and metaprogramming is for things like cell-phone apps, when the hardware is likely not optimized because of die-space and power consumption. Even in time-critical apps, human optimization will likely break *other* compiler optimizations.

    You've never done embedded software, have you ?

  • (cs) in reply to JGW
    Anonymous:

    You've never done embedded software, have you ?


    If you bothered to read the rest of my post, I said that its main use would precisely be in embedded software.

  • (cs) in reply to JGW
    Anonymous:
    Before you cast aspersions, may I suggest that you examine the whole Boost library, and dismount your high-horse. It is far from being - as you put it - the, "Biggest load of crap ever", and is, perhaps, more correctly described as "One of the most frequently used C++ libraries, much of which is likely to be made part of the C++ standard"


    Definitely.  There's a lot of useful stuff in it!  The BOOST_STATIC_ASSERT alone can save you tons and tons of headaches, when other programmers on your project cheerfully start adding to one structure that NEEDS to have a certain memory layout; or when they insert new values at the wrong place in an enum... or when you port to another architecture that ruins your size assumptions.

    I don't know that I would classify metaprogramming under 'OCD optimization', it seems to be more of a set of tools that allow you to write more flexible code that's potentially more resistant to accidental breakage by uncomprehending third parties.  Whether or not you actually use it for that effect, though..
  • Casey (unregistered)

    This is utterly ridiculous for more reasons than compile time. As stated, the template only takes constants. Forget writing out the math or any wrapper at all. There is no way that this is faster anywhere than just writing 20. No compiler optimizing, no math, just write the constant.

    Programming is supposed to require a basic knowledge of mathematics, including how to multiply for yourself, or at least use a calculator. The only reason to ever rely on the compiler/optimizer for any constant math is the occasional readability improvement, which this solution kills.

  • DjDanny (unregistered)

    WTF?! Your spelling of the word accommodate is atroshuss!

  • Fregas (unregistered) in reply to dubwai
    dubwai:

    The biggest WTF about this is that it's a code obfuscation that accomplishes less than nothing.  Something I see all the time.

    I think you hit the nail right on the head.  Was the supposed "extra performance" at compile time really worth the work and code obfuscation? Why do developers insist on making the simple ridiculously complex?

  • (cs) in reply to snowyote
    snowyote:


    Definitely.  There's a lot of useful stuff in it!  The BOOST_STATIC_ASSERT alone can save you tons and tons of headaches, when other programmers on your project cheerfully start adding to one structure that NEEDS to have a certain memory layout; or when they insert new values at the wrong place in an enum... or when you port to another architecture that ruins your size assumptions.

    I don't know that I would classify metaprogramming under 'OCD optimization', it seems to be more of a set of tools that allow you to write more flexible code that's potentially more resistant to accidental breakage by uncomprehending third parties.  Whether or not you actually use it for that effect, though..


    Why would a structure NEED to have a certain layout, unless you were doing some crazy pointer arithmetic, were ignoring the sizeof() operator, or abusing the MS serialization function?  Isn't the whole point of object-oriented programming that you *won't* break an object when adding members to it?
    Again, why would enums be affected by adding values... since the whole point of enum is that you can change the numeric value of an enum without breaking anything?
    Again, if you avoid pointer arithmetic and use sizeof(), you won't have any size assumptions...

    I don't know.  Their MPL library just isn't for me... sorry if I come off as a nay-sayer.  If it turns you on, go ahead and use it...

    Just out of curiosity though, I took a look at the other stuff on the boost.org site, and their graph library *does* look interesting...

  • (cs) in reply to Fregas

    Fregas:

    Why do developers insist on making the simple ridiculously complex?

     

    Because it makes their e-penises bigger.

  • (cs)

    The Boost metaprogramming library has quite a few applications - but none of them has anything to do with what the author of this snippet was trying to accomplish.

    The author tried to speed up something that doesn't need speeding up, namely, compile-time constant-folding.

    The MPL is a library made for the use of metaprograms. This is very much like the STL was made for the use of programs. The STL is not a program in its own right, and does not do anything useful by itself. Similarly, one will rarely find a useful application of the MPL within a conventional C++ program. The MPL was written in order to simplify writing meta-programs. Meta-programs such as some parts of the iterator facade and adapter libraries. Such as the Spirit parser library. Such as, although these things don't use the MPL, Loki's object factories, generic visitors and multimethods.
    These libraries are not about optimization. They are about making code clearer and easier for the programmer. The libraries are often ugly, and their compilation can be slow, but the client code is beautiful and simple.

    If you understood how these things work, how the MPL can make their writing easier, then you wouldn't be so quick to diss Boost.MPL.

    The author of this WTF (and it is a big one) simply got caught up in the excitement of first learning about template metaprogramming, without understanding its applications and limitations.

    Oh, before I forget - I'm sure the Blitz++ users would be very interested in knowing that template-based optimizations are only important for embedded systems ...

  • (cs) in reply to Charles Nadolski

    Some achieve crazy pointer arithmetic, and some have crazy pointer arithmetic forced upon them... :)  We've got a templatized linked-list class that assumes that its NODE type has m_pNext as its first member, so when it's removing a node from the list, it can just do something like this:

            // Since m_pNext is first element of NODE, we can cheat and pretend...
            NODE* pLast = (NODE*)(&(m_pHead));

    This lets you factor out the special-case when you're removing the head of the list.  Whether this is 'good code' or has a performance benefit to justify its memory layout assumptions is arguable, but it was like that when I got there.

    For an example of why it's handy to be able to enforce enum values at compile time, we have an enum that looks like the following:

    enum eFooBitField {
      kFooBitField_HasEars,
      kFooBitField_HasWings,
      kFooBitField_GetsDizzyALot,
      ... (snip) ...
      kNumFooBitFields
    };

    We then have a template class that's essentially a 'bit array', which lets you test/set individual bits, and takes as an argument the number of bits it holds.  So we'll instantiate BitArray<kNumFooBitFields>.  When we serialize it, we just bitcopy the data.  If someone serialized out that BitArray when kNumFooBitFields == 96, then added a bit, and tried to serialize it back in, it would eat a word that it wasn't supposed to.  It's nice to be able to put a compile-time assert in there to make sure that the sizeof hasn't changed - it only changes every 32 bits, at which point someone needs to add code to make sure that older streams serialize in correctly.  We're lucky that having new bits default to zero is acceptable behavior in this application.  It's definitely a bit gross, but it would be worse without static error-checking.

    You can also use compile-time asserts if you just know that a clever junior programmer is going to try to give a class a vftable when it needs to be passed around on the stack pretty frequently, and you want to bring that to their attention if they try to change it... or if you have a class (like a 3-d float vector) that for efficiency purposes needs to be bit-compatible with a comparable class defined in a middleware library, but you still want to use your own class internally in case you decide to switch over to other middleware later (or you want to be informed you're no longer bit-compatible if/when they change the layout in a library revision).  In an imperfect world there are all kinds of reasons why a little compile-time reassurance helps :)

    I don't use Boost::MPL personally or professionally, mind, but I appreciate their efforts, and it's always neat to see what you can do at compile-time, and maybe take home some lessons that you use yourself.  It can save you a lot of tears in the long run!

  • (cs) in reply to CornedBee
    CornedBee:
    The Boost metaprogramming library has quite a few applications - but none of them has anything to do with what the author of this snippet was trying to accomplish.

    The author tried to speed up something that doesn't need speeding up, namely, compile-time constant-folding.

    The MPL is a library made for the use of metaprograms. This is very much like the STL was made for the use of programs. The STL is not a program in its own right, and does not do anything useful by itself. Similarly, one will rarely find a useful application of the MPL within a conventional C++ program. The MPL was written in order to simplify writing meta-programs. Meta-programs such as some parts of the iterator facade and adapter libraries. Such as the Spirit parser library. Such as, although these things don't use the MPL, Loki's object factories, generic visitors and multimethods.
    These libraries are not about optimization. They are about making code clearer and easier for the programmer. The libraries are often ugly, and their compilation can be slow, but the client code is beautiful and simple.

    If you understood how these things work, how the MPL can make their writing easier, then you wouldn't be so quick to diss Boost.MPL.

    The author of this WTF (and it is a big one) simply got caught up in the excitement of first learning about template metaprogramming, without understanding its applications and limitations.

    Oh, before I forget - I'm sure the Blitz++ users would be very interested in knowing that template-based optimizations are only important for embedded systems ...


    Thanks for the info, I'll keep a more open mind next time around, and avoid flaming before hitting the "Post" button ;)

    I remember running into Blitz++ once.  We didn't have the time or budget to take a serious look at it for this time-critical app I had to re-write... At one point I was translating Fortran DLLs into native C++.   Why you ask?  Well, well-written C++ is ALWAYS faster than WTF-worthy FORTRAN :D  I wish I could post the code here but I don't want to get into trouble...
  • (cs) in reply to snowyote
    snowyote:

    For an example of why it's handy to be able to enforce enum values at compile time, we have an enum that looks like the following:

    enum eFooBitField {
      kFooBitField_HasEars,
      kFooBitField_HasWings,
      kFooBitField_GetsDizzyALot,
      ... (snip) ...
      kNumFooBitFields
    };

    We then have a template class that's essentially a 'bit array', which lets you test/set individual bits, and takes as an argument the number of bits it holds.  So we'll instantiate BitArray<kNumFooBitFields>.  When we serialize it, we just bitcopy the data.  If someone serialized out that BitArray when kNumFooBitFields == 96, then added a bit, and tried to serialize it back in, it would eat a word that it wasn't supposed to.  It's nice to be able to put a compile-time assert in there to make sure that the sizeof hasn't changed - it only changes every 32 bits, at which point someone needs to add code to make sure that older streams serialize in correctly.  We're lucky that having new bits default to zero is acceptable behavior in this application.  It's definitely a bit gross, but it would be worse without static error-checking.


    Bit fields are nasty things :) Have you considered using built-in bitfields for C++? They may or may not help you:

    struct Date
    {
       unsigned nWeekDay  : 3;    // 0..7   (3 bits)
       unsigned nMonthDay : 6;    // 0..31  (6 bits)
       unsigned nMonth    : 5;    // 0..12  (5 bits)
       unsigned nYear     : 8;    // 0..100 (8 bits)
    };

    I also had a superlong post about the serialization solution at work that I came up with that requires NO versioning, and you can make any changes you wish to a class and have forward and reverse reading compatibility.  Unfortunately the WTF-worthy forum software ate my post.  I can send you a code example if interested ... firstname.lastname at google's email thingie.
  • anon (unregistered) in reply to Charles Nadolski

    Charles Nadolski:
    well-written C++ is ALWAYS faster than WTF-worthy FORTRAN :D  I wish I could post the code here but I don't want to get into trouble...

    I hate to take issue, but Fortran is actually likely to be faster - why? Because the compilation problem is much more tractable (especially in the case of F-77 - none of that nasty dynamic memory stuff :P ). Don't forget that Fortran is still one of the primary languages of the scientific community because of this. Generally though I'd agree with you that Fortran is a nightmare (and steps should be taken to eradicate it from the face of software engineering etc, etc)

  • Top Cod3r (unregistered)

    Sure, it's easy to criticize this code taken out of context, but perhaps the platform this code was running on is not efficient at doing multiplication, or maybe the developer wanted to avoid the security holes that are notorious in standard multiplication routines.

  • joe_bruin (unregistered) in reply to Charles Nadolski
    Charles Nadolski:

    struct Date
    {
       unsigned nWeekDay  : 3;    // 0..7   (3 bits)
       unsigned nMonthDay : 6;    // 0..31  (6 bits)
       unsigned nMonth    : 5;    // 0..12  (5 bits)
       unsigned nYear     : 8;    // 0..100 (8 bits)
    };


    Year, 0-100?  Talk about a WTF.  Perhaps you could put your 22 unused bits to good use (14 bits to the year 16383).  And since when does a week have 8 days, a month 32, and a year 13 months?
  • (cs)

    A couple of years ago I encountered something similar. A few friends asked me if I could help them because their program was sort of slow. They said they had done everything they could think of - they even wrote everything in assembler to speed it up, and optimized the assembler code to use only registers (wherever possible) etc. etc. However, with the darn 800MB files it was still working too slow. After they explained what they were actually trying to achieve, I figured out that if they changed the algorithm - instead of scanning each time, take a bit more memory to index the data and then use quicksort - it wouldn't take significantly more memory, but it dropped the execution time from hours to a couple of seconds (almost instant - reading the file was the slowest operation)

  • John (unregistered) in reply to Top Cod3r
    Anonymous:

    Sure, its easy to criticize this code taken out of context, but perhaps the platform this code was running on is not efficient at doing multiplication, or maybe the developer wanted to avoid the security holes that are notoriously in standard multiplication routines.



    WTF?!  It has nothing to do with whether the platform is efficient at doing multiplication or not.  All of this happens at compile time.  It simply won't matter how fast the platform is at multiplication since it will never do it at run-time.  In addition, there are many compilers that would probably barf when using template recursion like this (VC6 comes to mind).  Likewise, any compiler that would handle this recursive template pattern would, with less effort, multiply your constants for you at compile time anyway.  And if that weren't enough, this thing will fail if you try to use a negative number.  So, to recap, this:

    1. Might fail to work all-together on some compilers
    2. Will do the same thing, but slower, on good compilers
    3. Will run at the EXACT SAME speed at run-time either way
    4. Will blow up if you use a negative number.

    In what parallel universe could this atrocity possibly be justified?
  • Jon (unregistered)

    The template could have been rewritten like this:

    template <int firstval, int secondval>
    struct QuickMultiply
    {
        static const int value = firstval * secondval;
    };

    Had he written it this way, he might have figured it out sooner...

  • danielsn (unregistered) in reply to Charles Nadolski
    Charles Nadolski:
    snowyote:


    Definitely.  There's a lot of useful stuff in it!  The BOOST_STATIC_ASSERT alone can save you tons and tons of headaches, when other programmers on your project cheerfully start adding to one structure that NEEDS to have a certain memory layout; or when they insert new values at the wrong place in an enum... or when you port to another architecture that ruins your size assumptions.

    I don't know that I would classify metaprogramming under 'OCD optimization', it seems to be more of a set of tools that allow you to write more flexible code that's potentially more resistant to accidental breakage by uncomprehending third parties.  Whether or not you actually use it for that effect, though..


    Why would a structure NEED to have a certain layout, unless you were doing some crazy pointer arithmetic, were ignoring the sizeof() operator, or abusing the MS serialization function?  Isn't the whole point of object-oriented programming that you *won't* break an object when adding members to it?
    Again, why would enums be affected by adding values... since the whole point of enum is that you can change the numeric value of an enum without breaking anything?
    Again, if you avoid pointer arithmetic and use sizeof(), you won't have any size assumptions...

    I don't know.  Their MPL library just isn't for me... sorry if I come off as a nay-sayer.  If it turns you on, go ahead and use it...

    Just out of curiosity though, I took a look at the other stuff on the boost.org site, and their graph library *does* look interesting...

    Well, a structure might need the layout to ensure that it meets memory alignment. Logically, the functionality of the structure would be unchanged if you reorganize the struct - but most people wouldn't consider an increase in memory usage to be unchanged functionality.

    Similarly, sometimes one wants to use enums to index into an array. If you have an enum that ends with MAX_ENUM_VALUE, you can just size the array based on MAX_ENUM_VALUE, and know that all the enums will fit. Then some fool comes along and adds an enum to the list, after the MAX_ENUM_VALUE, and suddenly you've overflowed a buffer.

  • (cs) in reply to Charles Nadolski

    So what was the final solution? A regular expression run across all the source to change everything into real multiplies?

    Charles Nadolski:
    Why would a structure NEED to have a certain layout, unless you were doing some crazy pointer arithmatic, were ignoring the sizeof() command, or abusing the MS serialization function?  Isn't that the whole point of object oriented programming is that you *won't* break an object when adding members to it?

    If I were feeling bitchy I would say, standardized wire protocols must really throw you, eh? ^_~ But I know better. I've known guys who enter a new field in version 2.36153 or something and put out a new client and server, and when the new server kills all the old clients or vice versa, their solution is to just upgrade everything! And they never tell you until you call to complain.

    Anonymous:
    And since when does a week have 8 days, a month 32, and a year 13 months?

    C'mon, dude, some companies store these things 0-based, some 1-based. (And wtfs like java both-based.) Might as well not pound that particular square peg into your round hole unnecessarily.
  • (cs) in reply to joe_bruin
    Charles Nadolski:

    struct Date
    {
       unsigned nWeekDay  : 3;    // 0..7   (3 bits)
       unsigned nMonthDay : 6;    // 0..31  (6 bits)
       unsigned nMonth    : 5;    // 0..12  (5 bits)
       unsigned nYear     : 8;    // 0..100 (8 bits)
    };


    I understand and like the idea of this, except for one thing... Why use 5 bits for 0 - 12? 4 bits is enough for that purpose is it not? Or am I missing the obvious (which is very possible).

    Does the compiler/runtime understand enough to place these structs one after the other to actually warrant saving 10 (or 11 if using 4 bits for year... ah is that why it is 5, to create an even size for the struct??) bits on making the year more y2100 proof?

    Drak

  • Simon (unregistered) in reply to John

    Charles Nadolski wrote, in the context of his boost library rant (that I tend to agree with in most cases) that human optimisation will most likely break compiler optimisations:

    In most cases (sadly, the ones where optimising tends to get used), this is true. However, the point of human-based optimisations is supposed to be that we are smarter than the compiler. We use profiling, and then, using the ability to 'step away from the problem', we look at it from a higher level and optimise accordingly. Then we profile again, and fix the performance fuckups we have added. Lather, rinse, repeat.

    Of course, in reality, what happens is someone thinks "Hmm. Compilation of multiplications is what's slowing down my application" (assuming, rather generously, that the application in question compiles a lot of stuff that it then uses) and so writes some impenetrable and stupid template code to 'optimise' it. Of course, that "someone" is the resident C++ guru who can do no wrong, nobody questions the decision, and the code goes to production. By the time anyone realises what a completely idiotic idea it is, it's ingrained in everything and almost impossible to remove.

    Then someone else mentioned that stuff like this might be useful to protect you from people adding members to structs and the like and blowing memory alignment, completely missing the point that OOP (which C++ sort of is) is generally used to avoid having to worry about such stuff, and that your unit tests should catch such problems before they become a problem. Oh. Scratch that. C++ programmers don't unit test, do they? Unit testing is for the less_than_elite.

    And finally, Anonymous. Nice troll. "Its easy to criticize this code taken out of context". Hahahahaha

    Simon

  • mbc (unregistered) in reply to danielsn
    Anonymous:
    Similarly, sometimes one wants to use enums to index into an array. If you have an enum that ends with MAX_ENUM_VALUE, you can just size the array based on MAX_ENUM_VALUE, and know that all the enums will fit. Then some fool comes along and adds an enum to the list, _after_ the MAX_ENUM_VALUE, and suddenly you've overflowed a buffer.

    That's why you need to, *gasp*, document your solutions and comment your code. A single line of comments might be all it takes to prevent some fool shooting himself in the foot while trying to avoid triggering your WTF. ;)

  • StuP (unregistered) in reply to Casey
    Anonymous:
    This is utterly ridiculous for more reasons than compile time. As stated, the template only takes constants.


    Hmmmm.... hate to disagree (and this in no way justifies rewriting multiplication as a recursive addition... sheesh!) but C++ templates can be used with non-constant parameters; hence, it won't always be compiler optimised. The following could also be used:

    int i,j;
    cin >> i >> j;
    cout << QuickMultiply<i,j>::value

    Of course, it would slower than a wet week at doing a simple multiplication... but that's the WTF, not that it can only be used with constants so it would always be compiler optimised.


  • Martin Vilcans (unregistered)

    This is my first post, so I hope it isn't messed up by this WYSIWYG editor...

    I'm amazed at how often people "know" how to optimize code, often based on something they heard somewhere from someone, which may or may not have been true on a 20-year-old compiler. The only way to know is profiling. Looking at the compiler output is often just as good.

    This WTF is a good example.

    I tried the following code:

    int multiply1() {
        return QuickMultiply<4,5>::value;
    }

    int multiply2() {
        return 4 * 5;
    }

    int multiply3() {
        return 20;
    }

    All of these will return 20, and multiply1 is the fastest of them... Right... Here's the compiler output from a pretty old GCC compiling for the ARMI processor:

        .global    multiply1__Fv
    multiply1__Fv:
        @ args = 0, pretend = 0, frame = 0
        @ frame_needed = 0, current_function_anonymous_args = 0
        mov    r0, #20
        bx    lr
        .align    0
        .global    multiply2__Fv
    multiply2__Fv:
        @ args = 0, pretend = 0, frame = 0
        @ frame_needed = 0, current_function_anonymous_args = 0
        mov    r0, #20
        bx    lr
        .align    0
        .global    multiply3__Fv
    multiply3__Fv:
        @ args = 0, pretend = 0, frame = 0
        @ frame_needed = 0, current_function_anonymous_args = 0
        mov    r0, #20
        bx    lr


    So these three functions compiled to the very same code (two instructions: mov r0, #20 / bx lr). Any decent compiler would do the same.
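  • The thread never actually quotes the template itself, so for anyone trying to reproduce the test above, here is a plausible reconstruction of QuickMultiply (assumed shape: recursive addition with a zero base case; the article's exact code may differ):

    ```cpp
    #include <iostream>

    // Assumed reconstruction: compute a*b as a + a*(b-1) at compile time.
    template <int a, int b>
    struct QuickMultiply {
        enum { value = a + QuickMultiply<a, b - 1>::value };
    };

    // Base-case specialization: a*0 == 0 stops the recursion.
    template <int a>
    struct QuickMultiply<a, 0> {
        enum { value = 0 };
    };

    int main() {
        std::cout << QuickMultiply<4, 5>::value << "\n";  // same as 4 * 5
        return 0;
    }
    ```

    Note that a negative second argument never reaches the base case and recurses until the compiler gives up, which is presumably what the "multiply by a negative" jokes earlier in the thread are about.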
  • BogusDude (unregistered) in reply to danielsn
    Anonymous:
    Similarly, sometimes one wants to use enums to index into an array. If you have an enum that ends with MAX_ENUM_VALUE, you can just size the array based on MAX_ENUM_VALUE, and know that all the enums will fit. Then some fool comes along and adds an enum to the list, _after_ the MAX_ENUM_VALUE, and suddenly you've overflowed a buffer.

    Dude, using enums to index an array is a WTF! Using MAX_ENUM_VALUE does not have to be a WTF, but then you loop until MAX_ENUM_VALUE, so that even if someone adds something at the end, you don't overflow the buffer. If you overflow a buffer, don't blame the enum, blame the idiot who used the enum incorrectly.
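
    The safe version of the pattern being argued about looks something like this (the enum names here are hypothetical): size the array from the sentinel and loop only up to it, so enumerators added before the sentinel stay in bounds automatically.

    ```cpp
    #include <iostream>

    // Hypothetical enum; MAX_COLOR is the sentinel that sizes the array.
    enum Color { RED, GREEN, BLUE, MAX_COLOR };

    int counts[MAX_COLOR] = {};  // grows automatically with the enum

    int main() {
        for (int c = 0; c < MAX_COLOR; ++c)  // loop bound == array size
            counts[c] = c + 1;               // never indexes past the end
        std::cout << counts[BLUE] << "\n";
        return 0;
    }
    ```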

  • enska (unregistered) in reply to Charles Nadolski
    Charles Nadolski:

    Bit fields are nasty things :) Have you considered using built-in bitfields for C++? They may or may not help you:

    struct Date
    {
       unsigned nWeekDay  : 3;    // 0..7   (3 bits)
       unsigned nMonthDay : 6;    // 0..31  (6 bits)
       unsigned nMonth    : 5;    // 0..12  (5 bits)
       unsigned nYear     : 8;    // 0..100 (8 bits)
    };


    Don't forget the possible problems of bitfields in the context of multithreaded programs, more specifically rewriting of adjacent data. :P (And why the f***k, has posting been made so goddamn difficult... "something didn't quite work out...", yeah yeah, bitch bitch)
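
    The threading caveat above comes down to read-modify-write: updating one bit-field forces the compiler to rewrite the whole allocation unit, so two threads touching *different* fields of the same Date can silently clobber each other (adjacent bit-fields share one memory location under the C++11 memory model; before that the behaviour was simply unspecified). Single-threaded, of course, they behave:

    ```cpp
    #include <iostream>

    struct Date {
        unsigned nWeekDay  : 3;
        unsigned nMonthDay : 6;
        unsigned nMonth    : 5;
        unsigned nYear     : 8;
    };

    int main() {
        Date d = {};
        // This store compiles to a read-modify-write of the whole unit
        // holding all four fields -- the reason concurrent writers to
        // different fields of the same Date still race.
        d.nMonth = 12;
        std::cout << d.nMonth << "\n";
        return 0;
    }
    ```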
  • javascript jan (unregistered) in reply to smitty_one_each
    smitty_one_each:
    Anonymous:
    Ok, here's a more specific boost link: http://www.boost.org/libs/mpl/doc/refmanual/metafunctions.html Here's the relevant bit: "All other considerations aside, as of the time of this writing (early 2004), using built-in operators on integral constants still often present a portability problem — many compilers cannot handle particular forms of expressions, forcing us to use conditional compilation. Because MPL numeric metafunctions work on types and encapsulate these kind of workarounds internally, they elude these problems, so if you aim for portability, it is generally adviced to use them in the place of the conventional operators, even at the price of slightly decreased readability." So, theoretically, they could be trying to avoid conditional compilation to handle cross-platform issues, in lieu of just doing the multiplication in their heads... nevertheless I still think it's a WTF ;-)


    C++ templates form a turing-complete, compile-time language in their own right.
    This wasn't the intention of Bjarne Stroustrup.
    I think he deserves some special spot in the WTF Hall of Fame for that, in a not-too-disrespectful way.


    Is that (strictly speaking, and what other kind of speaking is there?) true? - I thought that C++ guaranteed only a (fairly small - seven or so?) minimum level of template nesting that was guaranteed to be expanded by all compilers. So that would be a large space, but hardly "turing complete".
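
    On the nesting-depth point: the figure usually cited is 17, the minimum recursive template instantiation depth recommended by Annex B of the C++98 standard. It is only a recommendation, though, and real compilers go far deeper (often configurably), which is easy to probe with a counting template:

    ```cpp
    #include <iostream>

    // Depth<N> instantiates N+1 nested templates; C++98 Annex B only
    // *recommends* supporting at least 17 of these, but in practice
    // compilers handle hundreds or more.
    template <int N>
    struct Depth {
        enum { value = 1 + Depth<N - 1>::value };
    };

    template <>
    struct Depth<0> {
        enum { value = 0 };
    };

    int main() {
        std::cout << Depth<64>::value << "\n";  // 64 nested instantiations
        return 0;
    }
    ```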
  • Gambort (unregistered) in reply to anon

    I think you'll find that one of the reasons it is still used in the scientific community is that people's bosses understand it, and scientists are not programmers and don't want to learn new languages ;-). That is said as someone who has had to use Fortran for scientific work.

    The compiler benefits can be exploited (if you know how), but since they typically run a NAG-like library you could always just link to NAG from C++, for example. Furthermore, if you want to run FFTs, Fortran will suck and you have to call a C routine (where is the quick bit arithmetic in Fortran?).

    And WTF-worthy Fortran will be slower than C. Where Fortran is faster is in linear algebra.

  • (cs) in reply to Charles Nadolski
    Charles Nadolski:
    ...well-written C++ is ALWAYS faster than WTF-worthy FORTRAN...


    Ha, ha, ha! You misspelled Visual Basic.  Funny! :)

        dZ.
  • Will Varfar (unregistered) in reply to DZ-Jay

    struct Date
    {
       unsigned nWeekDay  : 3;    // 0..6   (3 bits)
       unsigned nMonthDay : 6;    // 0..30  (6 bits)
       unsigned nMonth    : 5;    // 0..11  (5 bits)
       unsigned nYear     : 8;    // 0..99 (8 bits) is this enough?
    };


  • (cs) in reply to StuP

    To those who say you can instantiate templates with non-constant values, err... no, you really can't do that. In fact, if you try, the error you get (in g++) is:

    error: 'i' cannot appear in a constant-expression.

    While I know people always try to defend WTFs, as the person who submitted this (woo! my 5 minutes of fame!), it really is pointless :)

    gcc, icc, vc++6 and 7 all turn:

    4*5 and QuickMultiply<4,5>::value into 20 in the assembler output at any decent optimisation level.

    Note that just replacing this with 20 is (perhaps) not possible, as it was being used to multiply constants together... but there is still no reason not to just multiply them :)

  • (cs)

    I think this is one of the biggest WTFs I've seen in a while. As others have pointed out, simply from the point of view of code maintainability this is a nightmare.

    Having been a contractor for years and having worked on large projects, I can tell you it is things like this that end up getting programs completely rewritten, because it is quicker to start from scratch than to deal with someone's FKed, tricky way of coding something just to be "clever".

    Makes me cringe just to look at it.

Leave a comment on “Becoming A WTF Believer”
