• wellwhoopdedooo (unregistered) in reply to brazzy

    Here's an example: I took over some code that had no checks for arithmetic overflow, but just looking through it for a half hour or so I found at least three different places where it was vulnerable. Luckily, I had a nice safe integer template class available that overloaded the living hell out of everything. The end result of all of this insane overloading was that you could literally drop this class in damn near anywhere that you had a built-in integer type. I say damn near only because nothing's impossible, but I haven't had to modify a line of code to take it yet. Thanks to that, and the fact that whoever wrote it in the first place was nice enough to #define the type (but not nice enough for a typedef...), it was as easy as #including the class and changing the #define to my new type. Bada bing! Every single overflow error was now reported.

    Try that on a 100,000 line code base without operator overloading.
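
    For anyone who hasn't seen the trick, the core of such a class looks something like this (a minimal sketch with invented names; the real thing overloads far more so it can stand in for a built-in int):

    #include <limits>
    #include <stdexcept>

    class SafeInt {
        int v_;
    public:
        SafeInt(int v = 0) : v_(v) {}   // implicit, so integer literals drop in
        int value() const { return v_; }

        SafeInt& operator+=(SafeInt o) {
            if (o.v_ > 0 && v_ > std::numeric_limits<int>::max() - o.v_)
                throw std::overflow_error("SafeInt: addition overflow");
            if (o.v_ < 0 && v_ < std::numeric_limits<int>::min() - o.v_)
                throw std::overflow_error("SafeInt: addition underflow");
            v_ += o.v_;
            return *this;
        }
        friend SafeInt operator+(SafeInt a, SafeInt b) { return a += b; }
        // ...and likewise for -, *, /, ++, --, comparisons, and so on.
    };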

  • (cs) in reply to Zygo
    Zygo:
    About the only operator that you never need to overload is the comma (",") operator.
    Try this in C:
    (foo,bar)(baz,eek);
    >:D
    real_aardvark:
    a lisp flavour such as haskell
    er... what? Haskell has lazy evaluation, enforced functional purity, infix operators, strong static typing... it's about as far from Lisp as you can get without actually banning parentheses.
  • (cs)

    The person who wrote that overload can hardly be proud of it. It also looks like this coder could come up with crazy code in any programming language. It doesn't look like an attempt to be clever, more like a serious lack of experience (and of the ability to think clearly).

    But the punchline indicates that the rest were no better.

    By the way, standard C++ list iterators support operators ++ and -- (to get to the next or previous node, duh), but not for example +=. Instead, advance() could be used. I guess it's there to make it harder for people to use a linked list for random access...

  • (cs) in reply to Iago
    Iago:
    real_aardvark:
    In essence, you are arguing that the One True Language Grail is a pure functional language (presumably a lisp flavour such as haskell)
    Haskell is not a Lisp flavour, and Lisp is not a pure functional language. Pure functional languages like Haskell are incapable of modifying code at runtime by definition - they cannot modify anything at runtime. That's what the "pure" means. <snip>
    Picky, picky. Clearly, zygo is arguing from lisp (otherwise, why quote whoever it is' tenth commandment?). I merely short-circuited the argument.

    Yes, I am well aware that Haskell is not a lisp flavour. It is, however, obviously inspired by lisp (and for all I know, CLU, and whatever). I am also well aware that lisp is not a pure functional language, and that Haskell is. And that therefore it cannot have side effects (such as, hem hem, modifying code at run-time. What a dickhead idea.)

    My point, if indeed I had one, is that this is irrelevant in a discussion about real-life commercial programming languages (much though I might like to program in Haskell. I'm more tempted by Scala, but that's by-the-by). Did I mention 99.999%? Are we talking Platonically real, or are we talking statistically real?

    And language wars, of the sort that zygo sparked off, are usually futile. I do, however, admit that zygo knows what he's talking about (cf comment on the ternary operator). Not relevant.

    In any language, the OP code is shit. I'm sure you couldn't write it in Haskell. I'm equally sure you wouldn't ever be allowed to try, at least outside academia. (For which I have a high regard, so don't bitch.)

    Well, that seems to be three points for the price of one. Bargain! Now, if only I can figure out how to get them into a linked list in C++, using an operator overload ...

  • Zygo (unregistered) in reply to JedaFlain
    JedaFlain:
    Once a problem domain reaches a critical level of complexity, the only solution available to a C++ developer is to implement a real language in C++, then implement the rest of the solution in the real language. Most people fail when they try to do this, they fall victim to Greenspun's Tenth Law, and their code ends up here.

    Doesn't the fact that you can code these "real languages" using C++ make C++ a real language by definition?

    Damn these metaphysical questions...

    If the C++ program consists entirely of an interpreter or compiler of a programming language, are the inputs to this C++ program written in C++ by definition? If so, how did we ever get the first C++ compiler? If not, can we possibly argue that the problems solved by executing the output of the C++ program when it consumes the program in the other language were solved by a program written in C++?

    Some problem domains are best solved in C++ (although that's a very small set, given the similarity between C++ and several other languages). Some problems cost more to solve directly in C++ than implementing another, better-suited language in C++ and solving them there.

    For example, you can solve the dynamic code generation problem by #including <fstream>, writing functions out as text files, and generating dynamic libraries with a C++ compiler, but you might be stuck with a performance problem afterwards. Several other languages solve this kind of problem better, and they occupy a continuum from "a bunch of expression trees in C++ objects" to "a stand-alone optimizing LISP compiler".

    What often happens is that at some point, your expression-tree class library becomes a separate language from its C++ implementation. If it happens gradually you might not even notice it happening--maybe your configuration file syntax or HTML templates sprouted variables and functions.
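
    To make that concrete, here's a toy version of the kind of expression-tree library that quietly grows into a language (sketch only; every name here is invented):

    #include <iostream>

    struct Expr {
        virtual ~Expr() {}
        virtual double eval() const = 0;
    };

    struct Const : Expr {
        double v;
        explicit Const(double x) : v(x) {}
        double eval() const { return v; }
    };

    struct Add : Expr {
        const Expr *l, *r;
        Add(const Expr* a, const Expr* b) : l(a), r(b) {}
        double eval() const { return l->eval() + r->eval(); }
    };

    int main() {
        Const a(2), b(3);
        Add sum(&a, &b);
        std::cout << sum.eval() << "\n";  // 5: interpreted from the tree
        // The day this sprouts variables, conditionals, and a parser for
        // your configuration files, it has become a separate language.
    }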

  • (cs) in reply to Devi
    Devi:
    real_aardvark:
    Zygo:
    real_aardvark:
    not to mention operator ->*()

    Hmmm...that would permit smart pointer-to-member classes (analogous to smart pointer classes, but they choose a member function for you instead of an object). That opens up entire worlds of polymorphism...

    (Ooh, I wish you hadn't said that. I wasn't even thinking of the consequences of this particular overload. Now I feel sick ...)

    Has nobody here ever used smart pointers? They rely on that overload to work properly.

    http://en.wikipedia.org/wiki/Smart_pointer

    Repeat after me: with the asterisk. With the asterisk.

    Feel better now?
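
    For the record, here's the distinction in a minimal sketch (everything below is invented): plain operator-> is the everyday smart pointer overload; operator->* is the exotic pointer-to-member one that started this subthread.

    #include <iostream>

    struct Widget {
        void poke() { std::cout << "poked\n"; }
    };

    // Helper returned by operator->*: binds the object and the member
    // function pointer so that (sp->*m)() works.
    struct BoundCall {
        Widget* obj;
        void (Widget::*mem)();
        void operator()() const { (obj->*mem)(); }
    };

    struct SmartPtr {
        Widget* p;
        explicit SmartPtr(Widget* q) : p(q) {}
        Widget* operator->() const { return p; }            // the everyday overload
        BoundCall operator->*(void (Widget::*m)()) const {  // the exotic one
            BoundCall b = { p, m };
            return b;
        }
    };

    int main() {
        Widget w;
        SmartPtr sp(&w);
        sp->poke();                         // via operator->
        void (Widget::*m)() = &Widget::poke;
        (sp->*m)();                         // via operator->*
    }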

    Actually, I'm beginning to imagine uses for "member function polymorphism," so I'd better get back to Nurse and ask for another beaker of dramamine.

  • Jon (unregistered) in reply to real_aardvark
    real_aardvark:
    Iago:
    real_aardvark:
    (presumably a lisp flavour such as haskell)
    Haskell is not a Lisp flavour
    Yes, I am well aware that Haskell is not a lisp flavour. It is, however, obviously inspired by lisp
    I haven't been following the discussion, but I'm singling out this point. You can trace Haskell and Lisp back to the lambda calculus, but, as far as I can tell, they don't have much else to do with each other.

    I think that Smalltalk and Ruby are examples of languages that took some inspiration from Lisp.

  • Zygo (unregistered) in reply to JL
    JL:
    It's funny that we're seeing an argument here that C++ isn't powerful enough, when the article is arguing that C++ is too powerful for its own good.

    Or arguing that C++ is powerful, but not in useful ways.

  • MikeCD (unregistered) in reply to nco
    nco:
    the macro NULL does not exist in C++, you use 0
    This is true, presumably the C++ standard just assumes that any OS for which you can compile C++ is probably capable of virtual memory mapping.
  • Darien H (unregistered) in reply to myname

    I'm in the "No overloading" camp.

    While overloading may make the code easier on the eyes, the dangers it provides in terms of pitfalls are far more significant. In my experience I am far, far more likely to have problems with "why the heck is this behaving this way?" than with separating out nested foo.add() calls, etc.

    Also, consider the issues it raises for refactoring, code analysis, and documentation tools.

  • Jon (unregistered)

    Haskell has some pretty flexible operator overloading:

    infixl 5 <!||><   -- Define the associativity and precedence of the <!||>< operator which I just made up
    a <!||>< b = (a + b) / 2   -- Define <!||>< to compute the average of two numbers
    main = print (100 <!||>< 200)   -- Test
    Additionally, you can use any two-argument function as an operator using backtick notation:
    infixl 5 `average`
    average a b = (a + b) / 2
    main = print (100 `average` 200)

  • (cs) in reply to Jon
    Jon:
    Haskell has some pretty flexible operator overloading
    What you mention isn't exactly overloading - just introducing a new operator. Haskell's "overloading" (by which i mean polymorphism) system is pretty cool, though.

    For non-Haskellers: basically, the only polymorphism is either: accepting parameters of any type that fits a certain shape (any list, any pair where the first item is an Int, etc); or accepting parameters of any type that (to translate roughly into OO-speak) implements a given interface.

    There's no distinction between objects and parameters, so such things as multimethods fall out naturally. Similarly, this system moots any questions of such things as multiple inheritance.

    Also, i'd be inclined to generalise "average" to lists:

    average = uncurry (/) . foldr ((***succ).(+)) (0,0)
  • Jon (unregistered) in reply to Irrelevant
    Irrelevant:
    Jon:
    Haskell has some pretty flexible operator overloading
    What you mention isn't exactly overloading - just introducing a new operator. Haskell's "overloading" (by which i mean polymorphism) system is pretty cool, though. ... Also, i'd be inclined to generalise "average" to lists:
    average = uncurry (/) . foldr ((***succ).(+)) (0,0)
    Whoops, right you are. Also, I got an error trying to run your code; is *** in a module somewhere? It looks like I might be able to use it in golf contests. :)
  • Jon (unregistered) in reply to Jon
    Jon:
    is *** in a module somewhere? It looks like I might be able to use it in golf contests. :)
    Never mind; I found it in Control.Arrow. The module name might be a little too long for golfing, but I'll keep it in mind anyway.
  • Zygo (unregistered) in reply to AdT
    AdT:
    Most pitfalls are ridiculously easy to avoid anyway. Most of the danger came from the C (near) subset in conjunction with lenient compilers. C++ still allows you to do everything you can do in C, but makes it harder to do it wrong by accident (sprintf vs. ostringstream is a nice example).

    Unfortunately, C++ still has sprintf, so you still get a chance to do it wrong, accidentally or otherwise.

    Then again, I've replaced a bunch of ostringstream code with sprintf in C++ because sprintf is so much faster, especially if you're building strings at the rate of a million a second. It is nice to have nothing but a thin veneer of syntactic sugar between C++ and assembly language for that tiny portion of total development time spent on performance optimization.
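
    Both halves of that position, in miniature (a sketch; the buffer size and format string are invented):

    #include <cstdio>
    #include <sstream>
    #include <string>

    std::string format_safe(int id, double price) {
        std::ostringstream os;              // no buffer to overflow
        os << "item " << id << " @ " << price;
        return os.str();
    }

    std::string format_fast(int id, double price) {
        char buf[64];                       // the caller must size this correctly!
        std::sprintf(buf, "item %d @ %.2f", id, price);
        return buf;                         // thin veneer over the C library
    }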

    Who says I only get to argue on one side of a debate? ;-)

  • Zygo (unregistered) in reply to Gazzonyx

    [quote user="Gazzonyx"][quote user="myname"][quote user="Asd"]... This example of operator overloading is a WTF, but there are plenty of instances where it can make code more readable and does make sense. String comparison and concatenation is one of the classic examples - very infrequently do you want to know if two strings use the same reference, and if you ever do, you can just cast them to (void*) to check anyway.[/quote]

    You can do that in pure C, but I'm pretty sure that C++ won't allow it (the use of void *, that is). Then again, most compilers will probably have a flag to enable it anyways. Not sure, to be honest.[/quote]

    In C++ you can take the address of the std::string objects. That doesn't tell you much since a typical implementation of std::string stores only a pointer in the object itself.

    You can find a character pointer to the data that lives in the string using data(), or construct a C string and get a pointer to that using c_str(). Either of those will give you a const char * to something that lives in the underlying object, although with copy-on-write semantics even that might be surprising.
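
    Concretely (a sketch; whether the second comparison prints 0 or 1 depends on whether your implementation actually shares buffers):

    #include <iostream>
    #include <string>

    int main() {
        std::string a = "hello";
        std::string b = a;
        std::cout << (&a == &b) << "\n";    // 0: distinct objects
        const char* pa = a.data();          // where the characters live
        const char* pb = b.data();
        std::cout << (pa == pb) << "\n";    // 0 or 1: depends on COW sharing
    }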

  • Zygo (unregistered) in reply to fist-poster
    fist-poster:
    By the way, standard C++ list iterators support operators ++ and -- (to get to the next or previous node, duh), but not for example +=. Instead, advance() could be used. I guess it's there to make it harder for people to use a linked list for random access...

    I once spent a month on a project that used a popular set of container classes instead of the STL ones. One "feature" of these classes is that they provide a fairly consistent interface--all the containers have an operator[], for example.

    The month started with trying to figure out why the thing was so damned slow.

    There were a few really large problems and a lot of small ones that accumulated. One was that the linked-list type provided operator[], and someone used it to iterate over a linked list using an integer index in a for loop from 0 to count(). The punch line: both operator[] and count() were O(N) operations, executing N times in a loop.

    If STL containers were used, the lack of an operator[] on std::list might have clued the developer into the fact that their algorithm was monumentally stupid.

    This problem is endemic in this class library. The UI portion of the library is full of "convenience" methods which implement operations that semantically make sense, but practically are 10 to 100 times more expensive than the "inconvenient" method (which is often the same method, just called on a more appropriate object).
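
    Reconstructed, the accident looked something like this (the LinkedList interface is hypothetical, but matches the description above):

    #include <list>

    // Reconstruction of the accident (LinkedList is hypothetical, with the
    // interface described above: operator[] and count() both walk the list):
    //
    //     for (int i = 0; i < mylist.count(); ++i)  // count(): O(N), every pass
    //         process(mylist[i]);                   // operator[]: O(N) walk
    //
    // Total cost: O(N^2) with a healthy constant. std::list won't even compile
    // the [] version, which nudges you toward the linear one:
    void process(int) {}

    void traverse(const std::list<int>& l) {
        for (std::list<int>::const_iterator it = l.begin(); it != l.end(); ++it)
            process(*it);                            // one O(N) pass in total
    }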

  • Zygo (unregistered) in reply to MikeCD
    MikeCD:
    nco:
    the macro NULL does not exist in C++, you use 0
    This is true, presumably the C++ standard just assumes that any OS for which you can compile C++ is probably capable of virtual memory mapping.

    Huh?

    C++ requires NULL (Annex C.2.2.3.1), requires NULL to be a macro (same section), and requires NULL to be a macro defined in any of cstddef, clocale, cstdio, cstdlib, cstring, ctime, or cwchar, or one of their C standard equivalents (C.2.2.3.1). C++ also requires the 18 C headers (Annex D.5, Annex C.2.1) which also define NULL.

  • Jon (unregistered) in reply to Irrelevant
    Irrelevant:
    Also, i'd be inclined to generalise "average" to lists:
    average = uncurry (/) . foldr ((***succ).(+)) (0,0)
    I've spent some time playing with Control.Arrow now; how about this? average = sum &&& genericLength >>> uncurry (/)
  • Kevin Kofler (unregistered)

    Zygo: Popular library with containers? UI portion? Was that Qt 3 by chance? If so, you'll be happy to know that this problem has been solved in Qt 4:

    • The default QList is no longer a linked list, it's an array of pointers, so operator[] is fast now.
    • QLinkedList doesn't provide an operator[]. You have to write *(linkedlist.begin()+i) if you really know what you're doing. Personally, I think the inconsistency in container interfaces is unfortunate, but it does solve the problem of programmers accidentally writing inefficient code.
  • nco (unregistered) in reply to Zygo
    Zygo:
    C++ requires NULL (Annex C.2.2.3.1), requires NULL to be a macro (same section), and requires NULL to be a macro defined in any of cstddef, clocale, cstdio, cstdlib, cstring, ctime, or cwchar, or one of their C standard equivalents (C.2.2.3.1). C++ also requires the 18 C headers (Annex D.5, Annex C.2.1) which also define NULL.

    Yes, NULL is required, but only for the C compatibility part of C++

  • fist-poster (unregistered)
    Zygo:
    There were a few really large problems and a lot of small ones that accumulated. One was that the linked-list type provided operator[], and someone used it to iterate over a linked list using an integer index in a for loop from 0 to count(). The punch line: both operator[] and count() were O(N) operations, executing N times in a loop.

    Ouch... OK, [] makes no sense for a linked list, but is it really that much overhead to make count() an O(1) operation for all containers by keeping track of the size at all times?

  • Avi Brenner (unregistered) in reply to First

    It does make sense, but not according to the initial phrasing. If you didn't know that there was an operator overload on the CFNode class, then you would be led to assume that the code would not compile, because you would be trying to add 1 to the CFNode object, and not 1 to the pointer.

    It only makes sense when you take the operator overload into account (well, it only works; it doesn't really make much sense).

  • itil (unregistered) in reply to TopTension

    "After that presentation, operator overloading was banned.

    And exactly that is the problem. How should people ever learn proper operator overloading if everyone is afraid of using it? Sure it can be abused, and often is, but the same holds for macros, even pointers, maybe even templates and plain functions. Should we ban them all just because from time to time we face situations where they are abused?

    captcha: burned. Do it with the programmer, not the language.

  • Tragomaskhalos (unregistered) in reply to Zygo
    Zygo:
    brazzy:
    Zygo:
    Banning operator overloading is stupid. C++ is limited enough as it is, without intentionally ignoring its feeble attempts to act like a real programming language.

    OK, I'll bite: what is, in your opinion, a "real programming language"?

    [Reams of LISP-weeniedom expunged, terminating, inevitably, with Greenspun's Tenth Law]

    Unless you have not encountered the species before, it was fairly obvious from his first comment that Zygo was a LISP weenie. These guys think that the rest of us, with our piffling real-world approaches to real-world problems, are stumbling troglodytes, ignorant of the One True Religion.

  • (cs) in reply to Jon
    Jon:
    Irrelevant:
    Also, i'd be inclined to generalise "average" to lists:
    average = uncurry (/) . foldr ((***succ).(+)) (0,0)
    I've spent some time playing with Control.Arrow now; how about this? average = sum &&& genericLength >>> uncurry (/)
    That one's got the advantage that it's obvious what it does (to someone familiar with Haskell and Control.Arrow, anyway), but it has to traverse the list twice -- once in sum, and once in genericLength.

    I thought of something to much the same effect -- "uncurry (/) . (sum &&& length)", I think it was -- but I thought that gratuitous premature optimisation / obfuscation would be more entertaining. It's the JAPH approach to averages!

  • Asd (unregistered)

    Java Collections beat the crap out of the STL quite handily without any need for operator overloading. >> for streams is completely unnecessary (how do people manage with printf?).

    The sensible uses for operator overloading always seem to come down to maths. So either we need built-in Vector, Matrix, Imaginary etc. types, or a special Numeric interface that the compiler will automatically add operator overloads for.
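
    A library can get most of the way there today, in the spirit of Boost.Operators (a sketch; the names below are invented): define += and == once, and a CRTP base manufactures + and != for you.

    #include <iostream>

    template <typename T>
    struct generated_ops {
        friend T operator+(T lhs, const T& rhs) { lhs += rhs; return lhs; }
        friend bool operator!=(const T& a, const T& b) { return !(a == b); }
    };

    struct Imaginary : generated_ops<Imaginary> {
        double re, im;
        Imaginary(double r = 0, double i = 0) : re(r), im(i) {}
        Imaginary& operator+=(const Imaginary& o) { re += o.re; im += o.im; return *this; }
        friend bool operator==(const Imaginary& a, const Imaginary& b)
        { return a.re == b.re && a.im == b.im; }
    };

    int main() {
        Imaginary x(1, 2), y(3, 4);
        Imaginary z = x + y;              // comes from the base class
        std::cout << (z != x) << "\n";    // so does this; prints 1
    }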

  • Asd (unregistered)

    Oh, and doing maths in Lisp with prefix notation is far, far worse than even Java style method calls.

  • bramzy (unregistered) in reply to Zygo
    Zygo:
    brazzy:
    Welbog:
    I take it you've never wanted to multiply matrices before. Operator overloading turns that hell into something wonderful.
    Actually, no, I haven't had to work with matrices anytime recently. If I had to, I don't see what's so hellish about using methods.

    Actually, for matrix work I prefer to use overloaded functions. "Matrix add(const Matrix &M, const Vector &V)" can be overloaded with "Matrix add(const Matrix &M, const Matrix &M2)" and all the other supported combinations. To use it with some objects "V1" and "V2" you just write "Matrix M = add(V1, V2)" without caring what types V1 and V2 are.

    I do hope you know you can overload operators too. Judging by your use of smart pointers, I guess you do.

    Btw, what exactly does adding a vector to a matrix do?

    Replacing a + b by add(a, b) does not add any information about what the function does. Neither does a.plus(b). operator-overloading-haters just seem to forget that the key here is using_function_names_that_reflect_their_operation. Whether that function name is "add", "plus", or "+" doesn't matter at all.

    OPERATOR OVERLOADING ABUSE == FUNCTION NAME ABUSE
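
    To put that in code (a sketch; the Matrix itself is hypothetical): the operator is just another name for a well-behaved add(), so abusing one is abusing the other.

    #include <cassert>
    #include <cstddef>
    #include <vector>

    struct Matrix {
        std::size_t rows, cols;
        std::vector<double> v;                 // row-major storage
        Matrix(std::size_t r, std::size_t c) : rows(r), cols(c), v(r * c) {}
    };

    Matrix add(const Matrix& a, const Matrix& b) {
        assert(a.rows == b.rows && a.cols == b.cols);
        Matrix m(a.rows, a.cols);
        for (std::size_t i = 0; i < m.v.size(); ++i)
            m.v[i] = a.v[i] + b.v[i];
        return m;
    }

    inline Matrix operator+(const Matrix& a, const Matrix& b) { return add(a, b); }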

    captcha: they're just pirates!

  • ali (unregistered) in reply to bramzy
    bramzy:
    operator-overloading-haters just seem to forget that the key here is _using_function_names_that_reflect_their_operation_. Whether that function name is "add", "plus", or "+" doesn't matter at all.

    OPERATOR OVERLOADING ABUSE == FUNCTION NAME ABUSE

    Couldn't agree more. The conclusion may be that the fear of operator overloading is a purely emotional one, probably coming from feeling uneasy when things look like math.

  • fist-poster (unregistered)
    (how do people manage with printf?)
    With errors?

    Unlike other operators, which are just conventional, << and >> carry visual information (towards the left, towards the right). With streams this is a perfect mnemonic.

    Is it possible that people who consider this overload ugly have a tendency to abuse the "real" shift operators (for multiplying by powers of 2)?

    In C++, by the way, most of the operators have very different meanings depending on context (* and & for example). Why hasn't this been brought up?

    And those that like to use left-shift for multiplying are already overloading the meaning of the operator: what is a bitwise operator is used as an arithmetic operator.
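
    For instance, with no user-defined overloads in sight:

    int main() {
        int a = 6, b = 7;
        int* p = &a;      // * declares a pointer; & takes an address
        int  c = a * b;   // * multiplies
        int  d = a & b;   // & is bitwise AND
        int& r = *p;      // & declares a reference; * dereferences
        return c + d + r; // 42 + 6 + 6
    }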

  • Jon (unregistered) in reply to Irrelevant
    Irrelevant:
    Jon:
    average = sum &&& genericLength >>> uncurry (/)
    ...I thought that gratuitous premature optimisation / obfuscation would be more entertaining. It's the JAPH approach to averages!
    I guess that's why they call it "pointless" style. :)
  • (cs)

    Too bad this thread erupted into a language war.

  • Samuel Carlsson (unregistered)

    My eyes! My eyes...

  • (cs) in reply to SuperousOxide
    SuperousOxide:
    I'm not sure the poster's logic "Knowing that almost every pointer value plus 1 will result in a non-NULL pointer" makes sense. Since it's a pointer to a class, you don't automatically know WHAT the pointer value plus 1 will equal. The correct objection is that CFNode + integer = pointer, which is not the way arithmetic operations usually work.

    Actually, the poster's logic is correct for every pointer value except one: 0xFFFFFFFF (on a 32-bit machine). Doesn't matter what the starting pointer is, object, integer, string, whatever; a pointer is just a memory address, and any value but 0xFFFFFFFF plus 1 (including starting with a NULL pointer) will result in a non-NULL pointer.

    Your objection is valid as well, of course.

  • fist-poster (unregistered) in reply to KenW
    SuperousOxide:
    The correct objection is that CFNode + integer = pointer, which is not the way arithmetic operations usually work.

    Not the way pointer arithmetic works, yes, well spotted! type + integer = pointer_to_type, huh!

    KenW:
    Actually, the poster's logic is correct for every pointer value except one: 0xFFFFFFFF (on a 32-bit machine). Doesn't matter what the starting pointer is, object, integer, string, whatever; a pointer is just a memory address, and any value but 0xFFFFFFFF plus 1 (including starting with a NULL pointer) will result in a non-NULL pointer.

    I don't follow you here. A pointer is not just a memory address. Pointers can also be valid or invalid in C++. If you add an offset to a pointer, the obtained pointer is valid only if it still points into the same object (e.g. array). Otherwise using it is undefined behaviour. So I can't see how you can walk into a null pointer (in a sequential array) without going past the end of the array and having undefined behaviour first. The compiler must be broken if it generates code where it is impossible to distinguish a valid pointer to an array element from a null pointer.

    I would also expect NULL + 1 to be undefined, as NULL cannot point inside your array.

    But this thread is about linked lists. An increment operator could (arguably) be used for a linked list iterator; addition (especially if it always adds 1, regardless of the other operand) is absurd.

    Could it be that the coder decided to overload + because his twisted logic required a strange test for loop termination? + 1 is used twice in the for statement: once without and once with side effects. So it seemed that the ++ operator wouldn't solve the problem, as it cannot be called without side effects (unless it is a side-effect-free increment :)).

    The whole strangeness of the logic may be related to the fact that the first and last nodes are not traversed. Could it be the kind of implementation, of the sort one sees on programming boards, that uses those nodes as invalid sentinels (maybe to avoid some special-case coding - empty list?)? I don't know, maybe it's a reasonable implementation, but then it should be packed behind an interface where the user of the list can iterate it from first to last without noticing any difference...
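
    If so, the overload might have looked something like this (purely speculative; CFNode's internals are invented here):

    struct CFNode {
        CFNode* next;
        // Ignores its right-hand operand and always steps one node -- which is
        // exactly why "node + 1" compiled, "worked", and ended up on this site.
        CFNode* operator+(int /*ignored*/) const { return next; }
    };

    // With that in place, something like the article's loop compiles and runs:
    //
    //     for (CFNode* n = first; *n + 1 != 0; n = *n + 1) { ... }
    //
    // even though "object plus integer yields pointer" is arithmetic nonsense.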

  • BrainDigitalis (unregistered) in reply to ali
    ali:
    p.p.s.: The real WTF is that the author thinks that NULL must be 0. Is not true.

    Actually the C++ spec says that NULL must be 0, I believe.

    captcha: paint

  • AdT (unregistered) in reply to Zygo
    Zygo:
    C++ is little more than a macro-driven assembler designed by a committee.

    Why do all the C++ haters have to resort to inaccurate polemics? Must be because they don't have real arguments. If I were to criticize LISP on the same level that you criticize C++, I would have to call it "Lots of Superfluous Irritating Parentheses", wouldn't I?

    Anyway, templates are not macros - macros operate on the syntactic level, while templates operate on the semantic level. True, macros were used before templates were introduced, but people realized it was a hack to use syntactic patterns where they should have used semantic patterns, and thus templates were born.

    Second, you don't have the slightest clue what an assembler is. I didn't use assembler very often, but often enough to know that C++ is nothing like that.

    Third, it's designed by a committee. Whoa, what an accusation! Perl 6 is supposedly designed by a benevolent dictator, and see what it turned into. At least it doesn't fit your "syntax description fits on a page" definition any longer. But then, as another user suggested, all you wanted to say was that LISP is the One True Programming Language, anyway.

    Zygo:
    forces developers to cope with--or at least be careful to avoid--register-level execution details while they're trying to build an e-commerce site.

    And more clueless tripe from a LISP programmer with the inferiority complex typical of LISP programmers. I feel so sorry for you that almost no one uses LISP outside of AI. C++ does not force you to deal with processor registers in any way; on the contrary, apart from the (obsolete) "register" keyword (which is a mere hint to the compiler) and inline assembler (which is not C++), you cannot even access registers. Well-written C++ will work on any processor from x86 over SPARC to ARM and can even be compiled into MSIL.

    Zygo:
    The STL is nice, but the STL existed for some years before many C++ compilers were able to compile it.

    Oh, wow. How many shitty LISP implementations are there? Compilers not being able to handle the STL - that's so long ago, but it's interesting that you'd care to remember that just so you can include it in today's smear.

    Zygo:
    Otherwise, C++ is a bit of a bastard child--real systems work gets done in C, real application development gets done in non-C++ languages (or it gets done in C++ at 10 times the development cost or bug count on delivery).

    Total nonsense. There is practically no reason to use C anymore, even if you use C++ (or a subset thereof) only as a better C. Huge and successful applications have been built using C++ and component frameworks. 10 times the development cost and bug count is a pure lie that ignores the fact that a) given some care, the usual pitfalls can be easily avoided and b) there are some pitfalls in every language. I once worked on an 80,000+ line Java application. I have seen memory leaks. I have seen premature object destruction. Both things that clueless Java zealots claim happen all the time in C++ (they don't) and that cannot happen in Java (they can). It's just your irrational disdain of any programming language that's more successful in the market than your own favorite that makes you utter these foolish claims.

    Also, there may be some additional risks involved when writing C++ code compared to Java, but let's not ignore the fact that Java does not merely take risks from you, it also strips you of chances. That's the necessary consequence of using a language that is designed to strip programmers of a large part of their freedom because supposedly the stinking compiler is smarter than you. But those who claim that xyz is so RAD and C++ is so SAD typically ignore that while some simple things are simpler in Java than in C++, many complex things are insanely more complex to do in Java or outright impossible. The same applies to LISP. They also tend to ignore the fact that while some features are dangerous to use, there is almost always a way that is both safer and more convenient in modern C++ (sorry, I cannot assume that you have even the faintest clue what that is, but this is the commonly used term). Like memory management. Do you know how often I use "delete" in my own programs? Almost never. Almost everything that I create with "new" (which I also need very rarely) immediately goes to a smart pointer and that's that.

    Posted using Firefox (large C++ application).*

    *) I know, any idiot can write a browser in LISP in 10 minutes that beats the shit out of Firefox. There are at least 1 billion idiots on Earth, so you shouldn't have a hard time finding a volunteer. I'm very interested in the results. The countdown starts now...

  • AdT (unregistered) in reply to fist-poster
    fist-poster:
    By the way, standard C++ list iterators support operators ++ and -- (to get to the next or previous node, duh), but not for example +=. Instead, advance() could be used. I guess it's there to make it harder for people to use a linked list for random access...

    The STL defines the random access iterator concept which requires that +, -, += and -= are O(1) (amortized). I personally consider this overly strict since it precludes the use of trees - in my opinion, this should have been:

    operator +(iterator iter, size_type m) is O(log(1 + abs(m))), etc.

    But no matter whether constant or logarithmic time is required, lists cannot keep either promise - their iterators advance in O(m). Therefore they are not random access iterators and do not provide +, - etc., to avoid confusion. That's why you have to use std::advance instead (or std::distance if you want to "subtract" two iterators). The good news is that std::advance can be used on random access iterators, too, and will be a constant time operation then. Dispatching on the iterator category takes care of that.
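
    In code (both calls compile; only their cost differs):

    #include <iostream>
    #include <iterator>
    #include <list>
    #include <vector>

    int main() {
        std::list<int>   l(100, 1);
        std::vector<int> v(100, 1);

        std::list<int>::iterator   li = l.begin();
        std::vector<int>::iterator vi = v.begin();

        // No li + 10: std::list iterators are bidirectional, not random access.
        std::advance(li, 10);   // walks node by node: O(m)
        std::advance(vi, 10);   // same call, but constant time for vectors

        std::cout << std::distance(l.begin(), li) << "\n";  // prints 10
    }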

  • AdT (unregistered) in reply to Zygo
    Zygo:
    Or arguing that C++ is powerful, but not in useful ways.

    A coworker of mine wrote a working C preprocessor prototype in C++ using Boost Spirit in 2 days. I call it a prototype because it wasn't standard-conforming (nor designed to be) - the C99 standard being a 1000-plus page document, after all, and 2 days are not quite enough to approach it.

    To speed up the discussion, I will play LISP's advocate and respond in your place:

    "Well, a C preprocessor is not useful anyway because C is not a Real Programming Language (tm). As you said, it takes over 1000 pages to document, and that's at least 999 pages too many. Another excellent reason why C is not a real programming language is that it's not LISP. And since the program you mention was a prototype, it was never meant to be used in production, making it even less useful than it would have been otherwise.

    You seem to think that 2 days would be a short amount of time for writing a C preprocessor. However, if he had used LISP, he could have done it in less than 3 hours and taken the rest of the day off to go swimming. I am not obliged to provide any evidence for this claim."

  • Zygo (unregistered) in reply to fist-poster
    fist-poster:
    And those that like to use left-shift for multiplying are already overloading the meaning of the operator: what is a bitwise operator is used as an arithmetic operator.

    Just for laughs one day, I read the Pentium 4 optimization manual (not all of it, just the summaries in the first few chapters and one or two detailed areas for reference).

    One of the performance tips: Replace arithmetic left shifts with sequences of up to 3 adds.

    How the hell can 3 adds be faster than a bit shift? Apparently there are hundreds of adder units on the chip available for parallel execution, but shifters are relatively scarce. This makes a kind of sense: every member access on a C++ object, and every access to an automatic variable or parameter on the stack, implies an addition. Except in specialized application domains, addition is by far the most common operation (even more common than memory fetch and store) in typical CPU workloads. The weird thing about the Pentium 4 is that most CPUs put dedicated adders in or near the chip areas that use them, while the P4 has a pool of them available for general use.

    So on a P4, code that says:

    b = a << 3;
    

    is slower than:

    tmp = (a + a);
    tmp2 = tmp + tmp;
    b = tmp2 + tmp2;
    // or just b = a + a + a + a + a + a + a + a;
    // which with CSE optimization is the same machine code
    

    because although the shift occurs in a single clock cycle, the shifter might be busy and several clock cycles could pass before the shift can be executed. On the other hand, the adds can start executing immediately and in parallel with other add instructions.

    At the end of the day it's all a bit silly though. Pentium 4 performance was generally awful except for a few strange corner cases like this, and most other CPU's (including previous Pentium versions) can shift faster (sometimes much faster) than they can do multiple adds.

  • (cs) in reply to fist-poster
    fist-poster:
    Ouch... OK, [] makes no sense for a linked list, but is it really that much overhead to make count() an O(1) operation for all containers by keeping track of the size at all times?

    It's a feature; rather than bloat all containers with bookkeeping, they instead give you different containers with different use and performance characteristics to best suit your needs. It makes the STL very fast, but means you can't always be sloppy about the choice of container you need. 15-20 years ago, when STL was designed, performance was still an overriding concern (and still is, of course, but now "scalability" seems to be the most serious performance concern, which STL doesn't really address).
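
    The classic illustration is std::list::splice, sketched here under the C++98 rules (where size() was allowed to be O(N)); C++11 later flipped the tradeoff, making size() O(1) and cross-list range splice O(N):

    #include <list>

    int main() {
        std::list<int> a(1000, 1), b(1000, 2);
        // splice() moves a range of nodes from b into a by relinking
        // pointers: O(1), no copies. If every list had to keep an exact
        // size member, the spliced range would have to be walked and
        // counted, making splice O(N) instead.
        a.splice(a.end(), b, b.begin(), b.end());
        return a.size() == 2000 ? 0 : 1;   // size() itself may be an O(N) walk
    }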

  • Zygo (unregistered) in reply to fist-poster
    fist-poster:
    I don't follow you here. A pointer is not just a memory address. Pointers can also be valid or invalid in C++. If you add an offset to a pointer, the obtained pointer is valid only if it still points into the *same* object (e.g. array). Otherwise using it is undefined behaviour. So I can't see how you can walk into a null pointer (in a sequential array) without going past the end of the array and having undefined behaviour first. The compiler must be broken if it generates code where it is impossible to distinguish a valid pointer to an array element from a null pointer.

    It's a dirty little secret: C++ on most CPUs wastes at least 1 byte of address space at the beginning of the range of pointer types. You can't put an object there, because "address of object" must be distinguishable from NULL.

    Most compilers reserve, or prefer not to allocate, the first N bytes of memory, where N ranges from 1 on microcontrollers to megabytes with special debugging compile options. Often a memory fault occurs if you attempt to access not just NULL, but small offsets from NULL as well (so accessing "pFoo->member123", which might be at NULL + 43 bytes, still causes a memory fault).

    On really small microcontrollers things tend to drift a bit from the ISO C standards. One microcontroller I used had 256 bytes of memory that was 60% faster than the other 63.75K of memory--it could be accessed with 8-bit addresses and a special set of instruction opcodes. You could ask the C compiler to allocate static and global variables there with non-standard storage class specifiers. Since the 256 byte area is typically at the lowest memory addresses, the first variable allocated there always had the address NULL. This wasn't usually a big deal--the CPU instruction set did not provide for pointer accesses to these variables anyway, so if you were taking the address of one of these variables you were using slower CPU instruction opcodes and negating the speed advantage of the storage location. This memory area was typically used for local temporary variables in time-critical, non-reentrant loops, or global state variables that were accessed very often throughout the program--uses that do not involve pointer access to the variables (although the variables themselves often were pointers). The linker inserted a byte before the first byte of BSS if required, so "normal" variables (with ISO standard storage classes) never had NULL addresses.

    A convenient way to avoid the "one past end of array must be a valid address" problem is to put program text at the top of the address space. Since you can't do pointer arithmetic on pointers to functions, nor have arrays of functions, the whole "0xFFFFFFFF + 1" problem can't arise (at least not within the limits of well-defined C++ programs).

    Most compilers and operating systems don't bother with these any more; they simply don't allocate the first and last pages of the address space, so no objects can be found near the boundary conditions of pointer arithmetic.

  • Richard (unregistered) in reply to Zygo
    Zygo:
    The biggest clue that you have a real programming language is that you can 1) write a function body in the argument list of a function,

    How does that make something a real programming language? When did closures become /the/ way programming languages are meant to behave? First-class or higher-order functions do not a programming language make.

    Zygo:
    Real programming languages can generate, inspect, or modify existing code at runtime, including (within reason) the code of the language compiler and runtime library.

    You want the world to run on interpreted languages? Go back to Smalltalk or Lisp you hippy!

  • Zygo (unregistered) in reply to Tragomaskhalos
    Tragomaskhalos:
    [Reams of LISP-weeniedom expunged, terminating, inevitably, with Greenspun's Tenth Law]

    As a language war grows longer, the probability of a reference to Greenspun's Tenth Law or LISP approaches one.

    (with apologies to Mike Godwin)

  • (cs) in reply to Zygo
    Zygo:
    JL:
    It's funny that we're seeing an argument here that C++ isn't powerful enough, when the article is arguing that C++ is too powerful for its own good.

    Or arguing that C++ is powerful, but not in useful ways.

    Good. We've gone beyond "real," not that anybody has set forth standards for what "real" might be, other than perhaps Turing-complete.

    Perhaps you'd like to give us a parameterized version of what "useful" might be?

  • Zygo (unregistered) in reply to AdT
    AdT:
    Anyway, templates are not macros - macros operate on the syntax level, while templates operate on the semantical level. True, macros were used before templates were introduced, but people realized it was a hack to use syntax patterns where they should have used semantical patterns, and thus, templates were born.

    Templates are macros. Well, Turing-complete macros. OK, maybe not macros per se. Or macros at all.

    Templates eventually solidify into assembly language code, then disappear. It is only possible to use the final outcome of template evaluation in the C++ program, which ties one hand behind your back for solving important classes of problems.

    AdT:
    Second, you don't have the slightest clue what an assembler is. I didn't use assembler very often, but often enough to know that C++ is nothing like that.

    Actually, I've used a number of assemblers and a number of assembly languages, from 8-bit microcontrollers to 64-bit workstations.

    AdT:
    C++ does not force you to deal with processor registers in any way

    Absolute rubbish.

    C++ is defined in terms of an abstract machine (1.9) and leaves a lot of parameters (e.g. sizeof(int)) up to implementors to define in ways that exactly match the physical hardware. Most commercially available CPUs today are designed to match the characteristics of the abstract machine efficiently.

    The behaviors of the C++ intrinsic data types are chosen based on a CPU design from the 1970's, and can cause significant problems in problem domains where numbers behave differently. To C++'s credit, it does provide a mechanism which often leads to a solution to this kind of problem that many other languages don't have, but other languages usually don't have this particular problem to start with.

    Pointers, references, and iterator invalidation semantics in the STL all have to be used with care so that the developer doesn't interfere with the details of the implementation or silently exceed the bounds of defined behavior.

    C++ provides an optional thin veneer over some of these details, if you refrain from using libraries supplied by your OS vendor and unenlightened third parties (<cough> Trolltech <cough>), and create classes to handle your problem domain's specific numeric semantics. Maintaining that discipline forces a developer to do a lot of extra thinking (or another developer to do enough thinking for two, in mixed-skill-set development teams), and this reduces productivity.

    AdT:
    Well-written C++ will work on any processor from x86 over SPARC to ARM and can even be compiled into MSIL.

    Exactly. If you carefully step around all the little mounds of earth, you won't be killed while dancing through a minefield.

    This is OK if you're writing an operating system, or the internal VM engine of a video game. It's a bit annoying when the code is meant to be an e-commerce application or an authorization module.

    Zygo:
    The STL is nice, but the STL existed for some years before many C++ compilers were able to compile it.

    ...and came from an ex-LISP hacker, too...or at least he sounds like one in interviews. He struck me as extremely unimpressed by the whole object-oriented fad.

    AdT:
    10 times the development cost and bug count is a pure lie

    Actually it's from direct observation. I have seen a number of instances where a C++ application is replaced with (or is simply displaced by) an application written in some other language at between 10% and 50% of the cost of developing the C++ application. In many instances the applications are written by different groups, so the replacement application isn't getting built with knowledge gained from the application it replaces.

    When the process happens in reverse, the opposite ratio arises.

    The issue seems to be that on average, developers are able to produce N lines of working code per year, and this number does not depend on the language they use; therefore, if your language allows you to do more with less code, on average you produce code more efficiently (cheaper, faster, better--pick two, but at least you get the choice of two).

    AdT:
    I once worked on an 80,000+ line Java application. I have seen memory leaks. I have seen premature object destruction. Both things that clueless Java zealots claim happen all the time in C++ (they don't) and that cannot happen in Java (they can).

    If C++, C#, and LISP zealots agree on anything, they agree on laughing at Java developers. :-)

    Perl developers take satisfaction in learning that they're not alone after all (the number of times I've had to adjust Perl scripts to avoid segfaulting crashes in core, or try to find a multi-gigabyte memory leak...).

    AdT:
    Do you know how often I use "delete" in my own programs? Almost never. Almost everything that I create with "new" (which I also need very rarely) immediately goes to a smart pointer and that's that.

    Sigh. I do that as well. I can build smart pointers with a variety of exotic semantics in my sleep (although most of the time I just reuse two or three templates). I haven't used delete outside of a smart pointer class since the years started with an odd digit.

    Unfortunately it seems I have to beat other people around the head with a stick before they'll do the same. Often those people are third parties or vendors who will respond to no amount of head-stick-beating.
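
    The core of that discipline fits in a dozen lines (a minimal hand-rolled sketch; a production version needs more care):

    // delete lives in exactly one place: the destructor.
    template <typename T>
    class scoped_ptr {
        T* p_;
        scoped_ptr(const scoped_ptr&);            // not copyable
        scoped_ptr& operator=(const scoped_ptr&); // not assignable
    public:
        explicit scoped_ptr(T* p = 0) : p_(p) {}
        ~scoped_ptr() { delete p_; }
        T& operator*() const  { return *p_; }
        T* operator->() const { return p_; }
    };

    struct Widget { void poke() {} };

    int main() {
        scoped_ptr<Widget> w(new Widget);  // "new" goes straight into the smart pointer
        w->poke();                         // operator-> makes it look like a raw pointer
    }                                      // destructor runs: no delete at call sites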

    OK, that's enough C++ bashing for now. Next week, I'll be proving that C++ is better than any language except Java. Stay tuned!

  • Zygo (unregistered) in reply to AdT
    AdT:
    Zygo:
    Or arguing that C++ is powerful, but not in useful ways.

    A coworker of mine wrote a working C preprocessor prototype in C++ using Boost Spirit in 2 days. I call it a prototype because it wasn't standard-conforming (nor designed to be) - the C99 standard being a 1000-plus page document, after all, and 2 days are not quite enough to approach it.

    To speed up the discussion, I will play LISP's advocate and respond in your place

    (pause while I ROFLMAO ;-) The other dozen or so languages I use every month will be offended if I get a reputation as a LISP advocate.

    I did do a mostly-working C preprocessor parser (supporting // and /* */ comments, #define, #include, #if, #ifdef, #ifndef, #else, __FILE__, __LINE__, macros with and without arguments, recursive expansion, # and ##, respecting string and character literals and backslash line continuations) years ago in under 5 hours in bare Perl code (in other words, starting from the standard language runtime, an empty text file, and a two-page description of the grammar). This happened during a programming contest, during which contestants could choose the implementation language for each problem from choices including Perl, LISP, Tcl, MATLAB, and C++. I picked Perl for the preprocessor problem because if Perl is good for only one thing, that thing must be text processing using simple grammars.

    The first time I participated in that kind of contest, I assumed that since I knew C very well, and none of the other languages, then I'd just do everything in C. That strategy worked on every contest I'd participated in until that date--but each of those contests allowed only C, C++, or Pascal, which for all practical purposes are the same language with different syntax. In this contest, I watched someone in the "amateur individual" category answer the problem questions with a score that beat not only all other amateur individuals in his own country, but all other categories up to "professional 3-person team" from any country. The strategy he used was simple: pick the programming language which provides the best developer productivity for the problem domain. It worked: he utterly defeated some very strong competitors by a margin of hours. The second and third place winners, who used one or two languages, were minutes apart.

    Choosing the right language for solving your problem is as important as choosing the libraries your program depends on and the operating system it runs on--it can affect developer productivity by orders of magnitude. This is something that is difficult to get across to people who have learned only one or two programming languages, especially very similar languages like C, C++, Java, and C#--they simply don't believe that it can take 10 times longer to implement a solution to a particular problem in their favorite language than in one they have never used, although for some reason they accept the reverse without question.

  • Zygo (unregistered) in reply to Richard
    Richard:
    You want the world to run on interpreted languages? Go back to Smalltalk or Lisp you hippy!

    Lisp and Smalltalk are not necessarily interpreted. Compilers do exist for both. The trick is that they keep around the parsed expression trees after generating the machine code (something most C++ compilers don't do). When you have LISP code examining itself, you are actually iterating over the data structure that was compiled into binary machine code. If you generate a new expression tree structure then you can get it compiled by the runtime library and execute it. People used to write whole operating systems in LISP and Smalltalk this way.

    This technique is still used today in a number of big production systems, especially where search queries are user-generated and fairly complicated (much more complicated than most SQL implementations), and the data set fits into a high performance server's RAM. Compiled LISP machine code translates the query into LISP data, then calls a function to turn the LISP data structure into a binary blob of machine code, then executes the binary blob, then frees the memory for the blob.

  • Zygo (unregistered) in reply to real_aardvark
    real_aardvark:
    Perhaps you'd like to give us a parameterized version of what "useful" might be?

    I meant to summarize the entire flamewar in a single sentence.

    Obviously operator overloading is a powerful tool (if nothing else, it generates vigorous flamewars ;-). If OO didn't have some useful purpose that couldn't be implemented as effectively in another way, it would never have survived review by the ISO committees. Arguments have been presented across the spectrum, ranging from 100% for to 100% against OO. So far, nobody has disputed that OO is a C++ feature. ;-) So at the end of the (second) day, the usefulness of a powerful C++ feature is clearly in dispute.

    I'm not really trying to say LISP is better than C++ (which seems to be how many people are characterizing my position). What I am saying is that C++ still lacks support for important problem domains (with a few examples of such domains from the LISP world, but there are others from languages I know less well than LISP which I can't talk about with as much depth). C++ did not always have templates (and it took some time for them to mature to the point they have reached now), yet C++ today is much better with templates than without.

    Why is it so strange to think that C++ is still missing important, useful features? Was God a member of the ISO committee, making sure they got it right for the first and last time? Does anyone really use C++ for years at a time and never think, "that was a really painful module to code, I wish it was easier to do that"?

    I have done a lot of work with EDA tools, where the typical working day can involve writing or debugging software in three or more languages, and a choice I almost always have to make is "which language do I implement this in?" The answer is almost never "C++" unless there is no choice at all, because all the other languages are much better adapted to their parts of the EDA landscape. It doesn't help that few people who work in these domains can actually understand C++ (never mind that they might spend their days designing the hardware C++ runs on).

    Intentionally avoiding tools that C++ already provides reasonably well (like operator overloading) seems spectacularly silly to me, when C++ would be even more painful to use without it. If the discussion were about banning multiple inheritance, all the same arguments would apply for ("damn it's confusing with all these ambiguous members!") and against ("MI vs. extreme pain in a few specific problem domains: pick one"). Pick any other C++ feature, same flamewar. C++ really needs most if not all the features it has now, and more besides.
