• Loz (unregistered) in reply to Jon

    I think your method will behave somewhat differently on negative inputs than the original.

  • (cs) in reply to Charles Nadolski
    Charles Nadolski:
    Bit fields are nasty things :) Have you considered using built-in bit fields in C++? They may or may not help you:


    I've never actually used them, but I've seen them used a long time ago.  Since we're usually using bit fields to store a bunch of flags here, I haven't come across a situation where it's really worth having the full flexibility of specific-precision integers, and I like being able to refer to bits by a subscript (so I can loop over them, etc.), but thanks for reminding me this is out there. C++ is such a huge language.
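    For what it's worth, here's a minimal sketch of the two approaches side by side (all names invented for illustration): std::bitset gives you the subscript access and loopability mentioned above, while built-in bit-fields trade that for named members.

    #include <bitset>
    #include <cstddef>

    // Flags in a std::bitset: individual bits are addressable by index,
    // so you can loop over them.
    std::bitset<8> flags;

    // The same idea with built-in bit-fields: each flag is a named
    // one-bit member, but there is no way to iterate over them.
    struct Flags {
        unsigned dirty    : 1;
        unsigned visible  : 1;
        unsigned selected : 1;
    };

    int main() {
        flags[2] = true;                         // subscript access
        for (std::size_t i = 0; i < flags.size(); ++i)
            if (flags[i]) { /* handle flag i */ }

        Flags f = { 0, 0, 0 };                   // all flags cleared
        f.dirty = 1;                             // named access only
        return 0;
    }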
    Charles Nadolski:
    I also had a superlong post about the serialization solution at work that I came up with that requires NO versioning, and you can make any changes you wish to a class and have forward and reverse reading compatibility.  Unfortunately the WTF-worthy forum software ate my post.  I can send you a code example if interested ... firstname.lastname at google's email thingie.


    D'oh!  That sounds -really- promising, though, I'd like to see how you do that. :)
  • Jester (unregistered) in reply to StuP
    Anonymous:
    Anonymous:
    This is utterly ridiculous for more reasons than compile time. As stated, the template only takes constants.


    Hmmmm.... hate to disagree (and this in no way justifies rewriting multiplication as a recursive addition... sheesh!) but C++ templates can be used with non-constant parameters; hence, it won't always be compiler optimised. The following could also be used:

    int i,j;
    cin >> i >> j;
    cout << QuickMultiply<i,j>::value;

    Of course, it would be slower than a wet week at doing a simple multiplication... but that's the WTF, not that it can only be used with constants so it would always be compiler optimised.

    <i ,j="">
    <i ,j="">



    I bet you didn't try that, right?

    The compiler will complain because i and j are not constants. Template parameters need to be compile-time constants.

  • (cs) in reply to Jester

    I think the WTF here begins with the fact that this is an optimization for multiplying small numbers. Even if the compiler didn't do the constant multiplication for you, WTF is so hard about the coder doing it? Something like:

    i = 20;   // hey, dumbass, this is 4*5

    ????????

  • (cs) in reply to anon
    Anonymous:

    Charles Nadolski:
    well-written C++ is ALWAYS faster than WTF-worthy FORTRAN :D  I wish I could post the code here but I don't want to get into trouble...

    I hate to take issue, but Fortran is actually likely to be faster - why? Because the compilation problem is much more tractable (especially in the case of F-77 - none of that nasty dynamic memory stuff [:P] ). Don't forget that Fortran is still one of the primary languages of the scientific community because of this. Generally though I'd agree with you that Fortran is a nightmare (and steps should be taken to eradicate it from the face of software engineering etc, etc)



    I'd normally agree with you, except for the fact that the Fortran code was the WTF.  Really, I timed both versions of the program to make sure.  There's this array of distance data, let's call it Distances.  The Fortran code, instead of predicting, just looped over the array from the start until it reached distance X to obtain the right index.  Given that this distance might be at index 10,000, it was highly inefficient.  I re-wrote it using a distance prediction routine that attempted to close in on the index in 5 iterations (example: target distance is 1000, difference between i and i+1 is 1, so the new i is predicted to be 1000).  If it failed to reach the local minimum in 5 iterations (if it bounced around the minimum instead of reaching it), it finishes off with a regular linear search.  Oh, and the Fortran version didn't even start from the last known point.  Now tell me, which code would be faster?
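    For readers curious what that looks like, here is a minimal sketch of such a prediction-then-linear-search routine; the names and details are guesses at the description above, not the actual code:

    #include <vector>
    #include <cstddef>

    // Distances is sorted ascending. Guess the index from the local spacing,
    // refine the guess up to 5 times, then finish with a plain linear scan.
    std::size_t FindIndex(const std::vector<double>& distances, double target,
                          std::size_t start = 0)
    {
        std::size_t i = start;
        for (int iter = 0; iter < 5 && i + 1 < distances.size(); ++iter) {
            double step = distances[i + 1] - distances[i];   // local spacing
            if (step <= 0.0)
                break;                                       // cannot predict, bail out
            double guess = i + (target - distances[i]) / step;
            if (guess < 0.0) guess = 0.0;
            std::size_t j = static_cast<std::size_t>(guess);
            if (j + 1 >= distances.size()) j = distances.size() - 2;
            if (j == i)
                break;                                       // converged
            i = j;
        }
        // Regular linear search from wherever the prediction ended up.
        while (i > 0 && distances[i] > target)
            --i;
        while (i + 1 < distances.size() && distances[i + 1] <= target)
            ++i;
        return i;
    }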
  • (cs) in reply to Drak
    Drak:
    Charles Nadolski:

    struct Date
    {
       unsigned nWeekDay  : 3;    // 0..7   (3 bits)
       unsigned nMonthDay : 6;    // 0..31  (6 bits)
       unsigned nMonth    : 5;    // 0..12  (5 bits)
       unsigned nYear     : 8;    // 0..100 (8 bits)
    };


    I understand and like the idea of this, except for one thing... Why use 5 bits for 0 - 12? 4 bits is enough for that purpose is it not? Or am I missing the obvious (which is very possible).

    Does the compiler/runtime understand enough to place these structs one after the other to actually warrant saving 10 (or 11 if using 4 bits for the month... ah, is that why it is 5, to create an even size for the struct??) bits on making the year more y2100-proof?

    Drak



    People, people, calm down! It's only an example!  No, there aren't supposed to be 13 months in a year, no, there aren't supposed to be 32 days in a month, and yes, it would be a bad idea to only store 100 years.  You guys sure are jumpy today!  Obviously, you would optimize the number of bits for whatever application you have.  And yes, you need to use 5 bits if your months start at one instead of zero.
  • (cs) in reply to snowyote
    snowyote:
    Charles Nadolski:
    I also had a superlong post about the serialization solution at work that I came up with that requires NO versioning, and you can make any changes you wish to a class and have forward and reverse reading compatibility.  Unfortunately the WTF-worthy forum software ate my post.  I can send you a code example if interested ... firstname.lastname at google's email thingie.


    D'oh!  That sounds -really- promising, though, I'd like to see how you do that. :)


    Just email me dude and I'll send it to you! (hint: look at the last sentence above what you wrote :D)
  • (cs) in reply to spotcatbug
    spotcatbug:
    I think the WTF here begins with the fact that this is an optimization for multiplying small numbers. Even if the compiler didn't do the constant multiplication for you, WTF is so hard about the coder doing it? Something like:

    i = 20;   // hey, dumbass, this is 4*5

    ????????


    magic numbers, that's why

    #define FOO_SIZE 8
    #define FOO_COUNT 42

    And somewhere else, using
    i = FOO_SIZE * FOO_COUNT;
    is better than
    i = 336;     // 8*42
  • (cs) in reply to AJR
    AJR:
    spotcatbug:
    I think the WTF here begins with the fact that this is an optimization for multiplying small numbers. Even if the compiler didn't do the constant multiplication for you, WTF is so hard about the coder doing it? Something like:

    i = 20;   // hey, dumbass, this is 4*5

    ????????


    magic numbers, that's why

    #define FOO_SIZE 8
    #define FOO_COUNT 42

    And somewhere else, using
    i = FOO_SIZE * FOO_COUNT;
    is better than
    i = 336;     // 8*42


    Bingo... but I still don't see any advantage to
    i = QuickMultiply<FOO_SIZE,FOO_COUNT>::value;
    over
    i = FOO_SIZE * FOO_COUNT;
    Surely if multiplication by adding numbers is so much faster, the compiler should be able to do that itself rather than being forced like this.
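    For anyone who doubts it, here's a minimal sketch (names invented) showing there is nothing for the template to add: the product of two compile-time constants is folded by the compiler whether it comes from the preprocessor or from const ints.

    #define FOO_SIZE  8
    #define FOO_COUNT 42

    // After preprocessing this is just i = 8 * 42, which the compiler folds
    // to 336 at compile time -- no runtime multiply, no template needed.
    int i = FOO_SIZE * FOO_COUNT;

    // The same effect without the preprocessor:
    const int kFooSize  = 8;
    const int kFooCount = 42;
    const int kTotal    = kFooSize * kFooCount;   // also folded to 336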
  • (cs) in reply to AJR
    AJR:
    spotcatbug:
    I think the WTF here begins with the fact that this is an optimization for multiplying small numbers. Even if the compiler didn't do the constant multiplication for you, WTF is so hard about the coder doing it? Something like:

    i = 20;   // hey, dumbass, this is 4*5

    ????????


    magic numbers, that's why

    #define FOO_SIZE 8
    #define FOO_COUNT 42

    And somewhere else, using
    i = FOO_SIZE * FOO_COUNT;
    is better than
    i = 336;     // 8*42

    Oh yeah. Duh. Forgot about the preprocessor.

    Maybe they figured the compiler lost the ability to optimize constant arithmetic operations. You know, to make room for new, cooler features like recursive template instantiation.

  • (cs) in reply to spotcatbug

    OMG! TEH QUOTE WORKED!

  • (cs) in reply to StuP
    Anonymous:

    Hmmmm.... hate to disagree (and this in no way justifies rewriting multiplication as a recursive addition... sheesh!) but C++ templates can be used with non-constant parameters; hence, it won't always be compiler optimised. The following could also be used:

    int i,j;
    cin >> i >> j;
    cout << QuickMultiply<i,j>::value;

    Of course, it would be slower than a wet week at doing a simple multiplication... but that's the WTF, not that it can only be used with constants so it would always be compiler optimised.




    Huh?  Which C++ compiler are you using that allows templates to be instantiated at run-time?

  • (cs) in reply to javascript jan
    Anonymous:
    smitty_one_each:
    Anonymous:
    Ok, here's a more specific boost link: http://www.boost.org/libs/mpl/doc/refmanual/metafunctions.html

    Here's the relevant bit: "All other considerations aside, as of the time of this writing (early 2004), using built-in operators on integral constants still often present a portability problem — many compilers cannot handle particular forms of expressions, forcing us to use conditional compilation. Because MPL numeric metafunctions work on types and encapsulate these kind of workarounds internally, they elude these problems, so if you aim for portability, it is generally adviced to use them in the place of the conventional operators, even at the price of slightly decreased readability."

    So, theoretically, they could be trying to avoid conditional compilation to handle cross-platform issues, in lieu of just doing the multiplication in their heads... nevertheless I still think it's a WTF ;-)


    C++ templates form a Turing-complete, compile-time language in their own right.
    This wasn't Bjarne Stroustrup's intention.
    I think he deserves some special spot in the WTF Hall of Fame for that, in a not-too-disrespectful way.


    Is that (strictly speaking, and what other kind of speaking is there?) true? I thought C++ guaranteed only a fairly small (seven or so?) minimum depth of template nesting that all compilers must support. So that would be a large space, but hardly "Turing complete".


    That's an implementation-specific limit.  17, IIRC, according to a comp.lang.c++.moderated post.  A little research here indicates the answer is likely "not strictly Turing complete" (http://en.wikipedia.org/wiki/Turing_complete).
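    For illustration, the usual textbook example of that compile-time language (not taken from the article, just the standard factorial metaprogram):

    // The value is computed entirely by template instantiation, bounded only
    // by the compiler's template nesting limit.
    template <unsigned N>
    struct Factorial {
        static const unsigned value = N * Factorial<N - 1>::value;
    };

    template <>
    struct Factorial<0> {
        static const unsigned value = 1;
    };

    // Factorial<5>::value is 120, usable wherever a constant expression is required:
    char scratch[Factorial<5>::value];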

  • Mitch Patenaude (unregistered)

    Aside from being slower and more obscure.. it's a lousy adding algorithm... it's O(n), here's one that's O(log(n)):

    // Stupid implementation of a shift-and-add
    // multiplier
    template<int firstval, int secondval>
      struct QuickMultiply
      { 
        static const int value = 
          QuickMultiply<(firstval << 1), (secondval >> 1)>::value + ((secondval & 0x1) != 0 ? firstval : 0);
      };
    

    template<int firstval> struct QuickMultiply<firstval,0> { static const int value = 0; };

    This would take QuickMultiply<4,5> and would go through

     = QuickMultiply<8,2> + 4 
     = QuickMultiply<16,1> + 0 + 4 
     = QuickMultiply<32,0> + 16 + 0 + 4
     = 0 + 16 + 0 + 4 
     = 20
    

    still stupid and unnecessary, but at least it's more efficient. ;-)

    -- Mitch

  • Mitch Patenaude (unregistered) in reply to Mitch Patenaude

    I was wondering why half the posts use escaped HTML.. it's because the forum escapes it, but the preview function doesn't.... So the Preview looks awful if you don't use HTML, and the post looks awful if you do. Somebody should fix this.

    The code is:

    Aside from being slower and more obscure.. it's a lousy adding algorithm... it's O(n), here's one that's O(log(n)):

    // Stupid implementation of a shift-and-add multiplier
    template<int firstval, int secondval>
      struct QuickMultiply
      {
        static const int value =
          QuickMultiply<(firstval << 1), (secondval >> 1)>::value + ((secondval & 0x1) != 0 ? firstval : 0);
      };

    template<int firstval> struct QuickMultiply<firstval,0> { static const int value = 0; };

    This would take QuickMultiply<4,5> and would go through

     = QuickMultiply<8,2> + 4
     = QuickMultiply<16,1> + 0 + 4
     = QuickMultiply<32,0> + 16 + 0 + 4
     = 0 + 16 + 0 + 4
     = 20

  • (cs) in reply to Mitch Patenaude
    Anonymous:
    I was wondering why half the posts use escaped HTML.. it's because the forum escapes it, but the preview function doesn't.... So the Preview looks awful if you don't use HTML, and the post looks awful if you do. Somebody should fix this. The code is: Aside from being slower and more obscure.. it's a lousy adding algorithm... it's O(n), here's one that's O(log(n)): // Stupid implementation of a shift-and-add multiplier template<int firstval, int secondval> struct QuickMultiply { static const int value = QuickMultiply<(firstval << 1), (secondval >> 1)>::value + ((secondval & 0x1) != 0 ? firstval : 0); }; template<int firstval> struct QuickMultiply<firstval, 0> { static const int value = 0; };

    This would take QuickMultiply<4,5> and would go through

    = QuickMultiply<8,2> + 4
    = QuickMultiply<16,1> + 0 + 4
    = QuickMultiply<32,0> + 16 + 0 + 4
    = 0 + 16 + 0 + 4
    = 20



    Nah, I'll only be impressed if you use a simulated hardware multiplier using inline VHDL. ;)  There IS a way to get it to nearly-constant time.  This exercise is left to the reader.

    But here's a hint...
    http://www.doulos.com/knowhow/vhdl_designers_guide/models/carry_look_ahead_blocks/

  • (cs) in reply to Azumanga

    If you're trying to optimize multiplies by adding -- you should be using powers of 2 anyway...


    so:


    x * 6 = ((x+x)+(x+x))+(x+x) -- 3 adds not 5.


    or better yet:


    x * 6 = (x << 2) + (x << 1) + x;


    So any claim that the original code is in any way shape or form a competent optimization is wrong (even assuming it could multiply variables).


    As for the claim that integer multiplies are already better than multiple additions -- well, that's also wrong. From an article on optimizing x86 assembler:

    INTEGER MULTIPLY: The integer multiply by an immediate can usually be replaced with a faster and simpler series of shifts, subs, adds and lea's. As a rule of thumb, when 6 or fewer bits are set in the binary representation of the constant, it is better to look at other ways of multiplying and not use INTEGER MULTIPLY. (The thumb value is 8 on a 586.) A simple way to do it is to shift and add for each bit set.


    In other words, the hardware integer multiply is NOT faster than bit shifts + adds if fewer than 8 bits of the value you're multiplying by are set (probably most of the time, and certainly true when you're multiplying by less than 256).

    For the full article, see:
    http://www.geocities.com/SiliconValley/2151/opts.html
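    As a concrete illustration of that rule of thumb (hand-rolled here, not from the quoted article), multiplying by a small constant can be written as shifts and adds:

    // Strength reduction by hand: 10*x = 8*x + 2*x, i.e. two shifts and one add.
    // Any decent compiler already does this on its own when it is profitable.
    unsigned times10(unsigned x)
    {
        return (x << 3) + (x << 1);
    }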


  • (cs) in reply to podperson

    Quick clarification:

    a = x + x;
    b = a + a;
    c = b + a;
    6 * x == c; // actually 3 adds vs. 5

    The better option is (correctly):

    6 * x == x << 2 + x << 1; // 2 bitshifts, 1 add.

  • StuP (unregistered) in reply to Jester
    Anonymous:

    I bet you didn't try that, right?

    The compiler will complain because i and j are not constants. Template Parameters need to be Compiler-Time Constants



    gack.... you're right. I misread the original post... the template parameters do have to be determined at compile time...

    I guess I'd just never imagined that someone would use templates in such a WTF way! I was thinking of something like QuickMultiply<int,int> and then calling multiply<int,int>(4,5) (or whatever that infernal syntax is that I still have to look up every time I use it 10 years later...) as a somewhat less WTF way of doing it!

    (and WTF is it with this board... All I want to do is include angle brackets in my post!!! A plain text board with no HTML wysiNwg editor would be a damned sight easier!!)
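    Presumably something like this minimal sketch (syntax from memory, names invented):

    // An ordinary function template: it works with runtime values too, and the
    // compiler still folds multiply(4, 5) down to the constant 20 on its own.
    template <typename T>
    T multiply(T a, T b)
    {
        return a * b;
    }

    int product = multiply(4, 5);   // deduces multiply<int>, yields 20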
  • (cs) in reply to Charles Nadolski
    Charles Nadolski:
    Drak:
    Charles Nadolski:

    struct Date
    {
       unsigned nWeekDay  : 3;    // 0..7   (3 bits)
       unsigned nMonthDay : 6;    // 0..31  (6 bits)
       unsigned nMonth    : 5;    // 0..12  (5 bits)
       unsigned nYear     : 8;    // 0..100 (8 bits)
    };


    I understand and like the idea of this, except for one thing... Why use 5 bits for 0 - 12? 4 bits is enough for that purpose is it not? Or am I missing the obvious (which is very possible).

    Does the compiler/runtime understand enough to place these structs one after the other to actually warrant saving 10 (or 11 if using 4 bits for the month... ah, is that why it is 5, to create an even size for the struct??) bits on making the year more y2100-proof?

    Drak



    People, people, calm down! It's only an example!  No, there aren't supposed to be 13 months in a year, no, there aren't supposed to be 32 days in a month, and yes, it would be a bad idea to only store 100 years.  You guys sure are jumpy today!  Obviously, you would optimize the number of bits for whatever application you have.  And yes, you need to use 5 bits if your months start at one instead of zero.

    Dude, I was calm. I was truly interested. And I still am. Why do I need 5 bits to make the number 12?

    1 bit = 0..1
    2 bits = 0..3
    3 bits = 0..7
    4 bits = 0..15   // enough for 0-11 or 1-12
    5 bits = 0..31

    Or has my knowledge of bit calculation degraded that much after using vb.net for 3 years?

    Drak

  • Hank Miller (unregistered)

    Okay, so their CPU has a really slow multiply (at least for small values).  I'll believe that.

    Their compiler is apparently gcc or something like that, but they wrote the backend in-house because one didn't exist for that CPU.  The way they wrote the backend broke both the optimizer and inline assembly.  Recursive templates still work, though.

    Function calls turn out to be surprisingly expensive - more expensive than a multiply.  Inline functions are an optimization that doesn't work, of course.

    Profiling revealed this program is spending 80% of its time multiplying small constants.  And the program was too slow.

    They tried:
    #define FOO 4
    #define BAR 5
    #define FOOTIMESBAR 15
    but as you can see when FOO changed from 3 to 4 they kept forgetting to update FOOTIMESBAR.

    It is easier to write templates to do this than to fix the compiler, so that is what they did.

    I just knew by sleeping on this I could eventually come up with an explanation.  I just wish you would tell me where this is, because I don't want to work at a company that faces those limits.

  • (cs) in reply to Drak
    Drak:
    Charles Nadolski:

    People, people, calm down! It's only an example!  No, there aren't supposed to be 13 months in a year, no, there aren't supposed to be 32 days in a month, and yes, it would be a bad idea to only store 100 years.  You guys sure are jumpy today!  Obviously, you would optimize the number of bits for whatever application you have.  And yes, you need to use 5 bits if your months start at one instead of zero.

    Dude, I was calm. I was truly interested. And I still am. Why do I need 5 bits to make the number 12?

    1 bit = 0..1
    2 bits = 0..3
    3 bits = 0..7
    4 bits = 0..15   // enough for 0-11 or 1-12
    5 bits = 0..31

    Or has my knowledge of bitcalculation degraded that much after using vb.net for 3 years?

    Drak



    You're absolutely right.  I must have had a brainfart :-/
  • Arne Vogel (unregistered) in reply to Charles Nadolski
    Charles Nadolski:
    I actually took a look at this so called "BOOST" library.  Biggest load of crap ever.


    Did you even realize that the WTF is NOT taken from Boost?

  • Arne Vogel (unregistered) in reply to StuP
    Anonymous:
    Anonymous:
    This is utterly ridiculous for more reasons than compile time. As stated, the template only takes constants.


    Hmmmm.... hate to disagree (and this in no way justifies rewriting multiplication as a recursive addition... sheesh!) but C++ templates can be used with non-constant parameters; hence, it won't always be compiler optimised. The following could also be used:

    int i,j;
    cin >> i >> j;
    cout << QuickMultiply<i,j>::value;

    Of course, it would be slower than a wet week at doing a simple multiplication... but that's the WTF, not that it can only be used with constants so it would always be compiler optimised.

    <i ,j="">
    <i ,j="">



    You're sure using an interesting compiler if it lets you get away with this, because, well, any standard-compliant C++ compiler won't.

    (In case you are reading this and are the person who wrote the forum software: take a beginner's course in GW-BASIC and try to work yourself up from that. Thank you.)

  • Arne Vogel (unregistered) in reply to DZ-Jay
    DZ-Jay:
    Charles Nadolski:
    ...well-written C++ is ALWAYS faster than WTF-worthy FORTRAN...


    Ha, ha, ha! You misspelled Visual Basic.  Funny! :)

        dZ.


    Oh, an anti-C++ troll. How original. I once wrote a lexer generator in C++ that emitted C++ code and, using a simple input grammar, the very first version produced a lexer, exception-handling and all, that was substantially faster than the flex-generated one.

    As far as FORTRAN is concerned: Oops. You have been blitzed ...

    See also Stroustrup94, The Design and Evolution of C++ for a discussion of the speed sacrifices Stroustrup was willing to accept for new features (hint: there aren't many ...).
  • Arne Vogel (unregistered) in reply to podperson
    podperson:
    Quick clarification:

    a = x + x;
    b = a + a;
    c = b + a;
    6 * x == c; // actually 3 adds vs. 5

    The better option is (correctly):

    6 * x == x << 2 + x << 1; // 2 bitshifts, 1 add.


    Actually not, because Kernighan and Ritchie had some very interesting ideas about operator precedence. So the compiler looks at this and interprets it as:

    (x << (2 + x)) << 1

    Which is not equal to 6 * x most of the time (except when x == 0).
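    With explicit parentheses the intended expression looks like this (a sketch, obviously):

    // Shifts bind less tightly than +, so parenthesize each term: 6*x = 4*x + 2*x.
    int times6(int x)
    {
        return (x << 2) + (x << 1);
    }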

  • (cs) in reply to foxyshadis
    foxyshadis:

    Charles Nadolski:
    Why would a structure NEED to have a certain layout, unless you were doing some crazy pointer arithmetic, ignoring the sizeof() operator, or abusing the MS serialization function?  Isn't the whole point of object-oriented programming that you *won't* break an object when adding members to it?

    If I were feeling bitchy I would say, standardized wire protocols must really throw you, eh? ^_~ But I know better. I've known guys who enter a new field in version 2.36153 or something and put out a new client and server, and when the new server kills all the old clients or vice versa, their solution is to just upgrade everything! And they never tell you until you call to complain.


    That just might be the reason why I *never*, if I can avoid it, put structs or classes and data exchanges too close together. It's simply not a good idea. Memory layout and endian issues always get in the way.
    The correct solution is to write serialize methods that feed the data to the target stream and are capable of reading versioned data back in. Java's object serialization was broken from the very beginning for long-term storage, and the Swing developers even acknowledge this. It's only useful for transferring data between two places that are guaranteed to run the same version of the software. Which is a rather rare case.
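    A minimal sketch of what that looks like (invented names, raw byte writes, and the endian issues mentioned above ignored for brevity): write a version number first, and have the reader branch on it so old files stay readable.

    #include <iostream>
    #include <cstdint>

    struct Widget {
        std::int32_t width  = 0;
        std::int32_t height = 0;
        std::int32_t depth  = 0;        // field added in version 2

        void Serialize(std::ostream& out) const {
            const std::int32_t version = 2;
            out.write(reinterpret_cast<const char*>(&version), sizeof version);
            out.write(reinterpret_cast<const char*>(&width),   sizeof width);
            out.write(reinterpret_cast<const char*>(&height),  sizeof height);
            out.write(reinterpret_cast<const char*>(&depth),   sizeof depth);
        }

        void Deserialize(std::istream& in) {
            std::int32_t version = 0;
            in.read(reinterpret_cast<char*>(&version), sizeof version);
            in.read(reinterpret_cast<char*>(&width),   sizeof width);
            in.read(reinterpret_cast<char*>(&height),  sizeof height);
            if (version >= 2)
                in.read(reinterpret_cast<char*>(&depth), sizeof depth);
            else
                depth = 0;              // sensible default for old files
        }
    };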
  • (cs) in reply to Arne Vogel
    Anonymous:
    DZ-Jay:
    Charles Nadolski:
    ...well-written C++ is ALWAYS faster than WTF-worthy FORTRAN...


    Ha, ha, ha! You misspelled Visual Basic.  Funny! :)

        dZ.


    Oh, an anti-C++ troll. How original. I once wrote a lexer generator in C++ that emitted C++ code and, using a simple input grammar, the very first version produced a lexer, exception-handling and all, that was substantially faster than the flex-generated one.

    As far as FORTRAN is concerned: Oops. You have been blitzed ...

    See also Stroustrup94, The Design and Evolution of C++ for a discussion of the speed sacrifices Stroustrup was willing to accept for new features (hint: there aren't many ...).


    I think he was actually bashing FORTRAN, because it's supposedly as easy to use as VB, only "it's faster than anything else out there"(TM), which is why it's still popular in the scientific community.  But what's really dumb is that there are BETTER mathematical programming languages out there, such as Matlab, which kicks FORTRAN's ass in terms of being closer to how math is represented and is super-optimized for ginormous arrays and mathematical equations.  Our company does analysis for pavement testing software, and all I can say is that most of it is written in C++, with only the most generic mathematical stuff being linked to Fortran DLLs (such as multi-dimensional regressions).  But I'd trade it for a link to a MATLAB runtime any day.  Since Blitz++ has been brought up, I might mention it next time we make analysis software... hell, it would probably be useful for a gaming engine as well.
  • You're all missing the point (unregistered) in reply to Charles Nadolski

    I think it's a job security thing :) Failed attempt though :) Funny though.

  • (cs) in reply to Martin Vilcans
    Anonymous:


    I'm amazed at how often people "know" how to optimize code, often based on something they heard somewhere from someone, which may or may not have been true on a 20-year-old compiler. The only way to know is profiling. Looking at the compiler output is often just as good.



    The place I used to work did this all the time.  They insisted that deleting and then inserting a row into a SQL Server table was faster than doing an update.  They had done "extensive testing" before they started the app to verify this.  I whipped up a test case that showed update was faster in every case, no matter the number of records, indexes/no indexes, etc.  They didn't really have any way to counter the results of my "extensive testing".  These sorts of things were all over the company and deeply embedded in many apps.
  • wrtlprnft (unregistered)

    I think I found a more efficient and no less obscure way of doing it:

    template <int a, int b> class multiply {
    	struct v {
    		char x[a][b];
    	};
    public:
    	enum {
    		value = sizeof(v)
    	};
    };
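    If I'm reading it right, usage would be something like the line below; note it only works for positive operands, since a zero or negative array dimension won't compile.

    int product = multiply<4, 5>::value;   // sizeof of a 4x5 char array == 20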
  • ... (unregistered)

    Aaaarrgggg!!!! My eyes! This screams "class" all round! The template (ab)use is bad enough, now he's using C's class-work-alike. (I don't have anything against structs, but they are better at storing/passing large amounts of vars to funcs/arrays.) Captcha: minim - yes, he should minimize the struct-as-class use! (I do have an account here, just can't be bothered to log in :P)
