• someone (unregistered) in reply to Zecc

    Everyone can do that with C++

    But with languages like Pascal, you have no chance

  • History Teacher (unregistered) in reply to Severity One
    Severity One:
    History Teacher:
    TRWTF are coders who don't understand pointers, yet touch any language less separated from hardware than crossplatform HTML+javascript.
    Oh, I understand pointers, but they are too easy to muck up, especially with C++'s Byzantine syntax, and an object-oriented language offers enough tools to pretend that pointers do not exist (such as Java does). Unless you're doing serious low-level or high-performance computing, there isn't much use for pointers.
    Fair 'nuff. But even Java handles are pointers; Java is just really good at preventing you from having invalid values. Pointers are what make a von Neumann architecture computer (or virtual machine) work; they're not a WTF.

    But yeah, first post and all that, never mind :)

  • History Teacher (unregistered) in reply to Zylon
    Zylon:
    pitchingchris:
    I disagree. There are plenty of useful things you can do with operator overloading.
    But is there anything you can do with operator overloading that you can't do without it? Y'know, other than making your code harder to decipher for maintenance programmers? Operator overloading always seemed like self-indulgent syntactic sugar to me.
    Do not underestimate syntactic sugar. There are concepts which make no sense to incorporate into the core language, yet the natural way of operating on them is with operators, and being forced to use method-call syntax produces horrible-looking, hard-to-understand code.

    But operator overloading is certainly something that should not be used without careful design. As such, it should be limited to well-documented libraries, and in application code operators should be used only as documented by the library. If it's not worth designing and documenting that carefully, it's better to use normal methods with self-documenting names. In C++, the same applies to templates.

  • Michael (unregistered) in reply to rjb
    rjb:
    I'm not going to lie, though I'd never append a null to the end of a string just for shits and giggles, programming in C++, especially with third party libraries, is often an exercise in frustration with regard to pointers, and the path of least resistance is to just try each pointer/object operator and go with whatever compiles properly.
    Oh dear! I have often suspected many people program that way, but (as with ANY OTHER PROGRAMMING CONCEPT) if you don't understand it, don't use it. If you NEED to use it, LEARN it!!

    Pointers are not hard, except when you first start - and I suspect that's largely because they overwhelm people. One day you have that 'aha' moment where suddenly the difference between * and & (and . and ->) makes sense. If this doesn't happen within a month of using C/C++, get a job using a different language....

  • Urad (unregistered) in reply to Severity One
    Severity One:
    History Teacher:
    Severity One:
    TRWTF is pointers.
    ...are you the author of this particular representative line, by any chance?
    No.
    History Teacher:
    TRWTF are coders who don't understand pointers, yet touch any language less separated from hardware than crossplatform HTML+javascript.
    Oh, I understand pointers, but they are too easy to muck up, especially with C++'s Byzantine syntax, and an object-oriented language offers enough tools to pretend that pointers do not exist (such as Java does). Unless you're doing serious low-level or high-performance computing, there isn't much use for pointers.
    You just made me cry.

    I work with people all day who have never seen anything other than C# and/or Java. In itself, this is no biggie, but I cringe at how wastefully they create objects, simply because they don't seem to understand the work being done under the hood. Whatever people think about C or C++, having to explicitly create and destroy objects (or allocate/free memory in C's case) gives you an understanding that there is effort involved; simply forgetting about a reference we don't need any more promotes an attitude that the resources don't matter. Granted, you don't have to understand pointers per se to understand the work being done, but I think it is important that people do at least a semester or two in a low-level language so they understand what is actually going on. I think one of the real problems with many of today's IT graduates is that they are too willing to let the compiler optimize away their bad habits.

    One of the examples I saw recently was a 'programmer' who implemented a small bitfield (4 bits) as a String in C# (and I was a bit concerned when I pointed out it might not be the best way to do it, a more senior programmer disagreed).

    A common argument I hear is "It doesn't matter, because we have so much memory and/or CPU available". When we're making a desktop Sudoku solver this is probably true. When we're making an app that has to support multiple users querying a large database and returning results in real time, you start to realise the importance of minimising resource consumption. (More frighteningly, when we had serious performance issues after a poor change on a fairly reasonably specced server, I was appalled that the common voice, even from people within IT, was "Throw more CPU at it".)

  • AGray (unregistered) in reply to java.lang.Chris;
    java.lang.Chris;:
    Sadly, there's a lot of odd code that a C++ compiler will happily compile:
    [chris@titwank ~]$ cat ouch.cpp 
    #include <iostream>
    #include <string>
    
    using namespace std;
    
    int
    main()
    {
        string s = "Are we having fun yet?";
        int value = 0xdeadbeef;
        cout << s.c_str() + value << endl;
        return 0;
    }
    [chris@titwank ~]$ c++ -o ouch ouch.cpp 
    [chris@titwank ~]$ ./ouch 
    Segmentation fault (core dumped)
    

    I am TRWTF - I do not follow (exactly) what's happening here. I can see the bit where that hex offset is added to the output stream. I'm not sure how this causes a Segmentation fault...explanation please?

  • AWP (unregistered) in reply to Jack
    Jack:
    Dave-Sir:
    However, because of precedence rules
    Precedence? WTF is precedence? That looks like a Very Big Word. I should not have to know it. I should not have to know anything. The computer should just do what I want.

    Programming languages are TRWTF. Nobody wants to type commands. So boring. And hard, dammit! There should just be a big round candy colored button and when I click it, it reads my mind and Just Works. If you can't design something to cater to my fantasy, you are no good.

    we're getting there.... ESBs and all that rot seem to have taken LEGO Mindstorms' idea of drag-and-drop programming so that any idiot can make an application.

  • Nobody (unregistered) in reply to rjb
    rjb:
    Perhaps I misworded what I said. What I meant was compiles properly, and exhibits the desired behavior. If it doesn't exhibit the desired behavior, then I wouldn't say it compiled properly, it just compiled. :p
    Er...so did the OP's - or so it seemed. How do you know if it exhibits the desired behaviour? Just because something appears to work, does not mean it is working. In fact, if we play with memory the wrong way we'll often create errors that occur some of the time but not all of the time (because at each run the memory being used is different - sometimes it may be empty, other times not). Playing the "I just want it to compile" game can be very dangerous - even if you think you are verifying the behaviour.
  • a;skur (unregistered) in reply to Steve The Cynic
    Steve The Cynic:
    java.lang.Chris;:
    Seeing that attempt at "string concatenation" leaves me with the feeling that Benedikt's predecessor may have been a Java programmer.
    As opposed to a colleague of mine at a previous job of mine, who had been doing too much work with JavaScript.

    So, to present a message like "Elements found: {number}", he would exploit JavaScript's unnatural fascination with automatic type conversions:

      var value = calculationFunction( parameters );
      var message = "Elements found: " + value;

    This doesn't work so well in C++.

    Ooh, I know, C++ doesn't use "var" ;)
  • johnno the wonderful (unregistered) in reply to Cbuttius

    Cbuttius:
    Using the null character to denote the end of a string has its advantages but can be a WTF at times.

    Prefixing the length means it is faster to calculate the length of the string, but makes it less maintainable. Firstly, the potential length of the string is limited by the size of its header, and secondly, when you modify the string you have to update this length.

    In addition, there is the method of "tokenising" a string into lots of strings by overwriting each "separator" character with a null, allowing each component to be a string on its own. In fact, sometimes you use a double zero to indicate the end of the sequence.

    Of course, for binary "blobs" there will often be embedded null characters, but in general these are not modified or lengthened the same way strings often are.

    The real WTF with C++ and strings is that the standard string is:

    1. really a typedef of the template basic_string<char, char_traits<char>, allocator<char> >
    2. not covered by a standard ABI, so if your library uses std::string in its interface it isn't guaranteed to work with a library built with a different compiler.

    It wasn't that long ago that you couldn't even pass a std::string between DLLs / shared objects built with the same compiler.

    All of which leads to you finding lots of "own implementations" of string, especially in legacy code.

    Of course, legacy code is still around because people still use these projects that are being maintained, and they brought in enough money to the business.

    lucidfox:
    TRWTF is rolling out their own string implementation.

    You know, as if C++ doesn't have enough of these already.
    uhm.....

  • hear hear (unregistered) in reply to Stev
    Stev:
    TRWTF is people complaining that a language "lets you shoot yourself in the foot". A gun will let you shoot yourself in the foot, much like a hammer will let you smash your thumb and a saw will let you cut your fingers off. C++ is a tool like any other and will cause a bloody mess if used without proper training or care, or if used in an inappropriate situation.

    You wouldn't use a chainsaw to slice a lemon in a cramped kitchen, nor would you use a carving knife to cut down a tree. C++ is no different - it has its place and where it is used appropriately it is positively unmatched. Likewise, when used inappropriately it'll cause massive amounts of bloodshed.

    The same can be said for just about any language. Deal with it.

    ++

  • SOme dude (unregistered) in reply to AGray
    AGray:
    java.lang.Chris;:
    Sadly, there's a lot of odd code that a C++ compiler will happily compile:
    [chris@titwank ~]$ cat ouch.cpp 
    #include <iostream>
    #include <string>
    
    using namespace std;
    
    int
    main()
    {
        string s = "Are we having fun yet?";
        int value = 0xdeadbeef;
        cout << s.c_str() + value << endl;
        return 0;
    }
    [chris@titwank ~]$ c++ -o ouch ouch.cpp 
    [chris@titwank ~]$ ./ouch 
    Segmentation fault (core dumped)
    

    I am TRWTF - I do not follow (exactly) what's happening here. I can see the bit where that hex offset is added to the output stream. I'm not sure how this causes a Segmentation fault...explanation please?

    s.c_str() returns a pointer to the first character of the string. Adding that hex value to the pointer moves it far outside the bounds of the string, almost certainly outside the process's valid address space, and the output stream then tries to read characters from that invalid address.

  • (cs) in reply to rjb
    rjb:
    Perhaps I misworded what I said. What I meant was compiles properly, and exhibits the desired behavior. If it doesn't exhibit the desired behavior, then I wouldn't say it compiled properly, it just compiled. :p
    I beg to differ.

    If it compiled, and it behaved the way the code was written, then it compiled correctly. If it does not exhibit the desired behavior, you have a logic problem, and that is your fault. It still compiled properly.

    Remember: the computer is a high speed idiot. It does exactly what you tell it to. It's your fault if your code doesn't exhibit the desired behavior, not the compiler's.

  • Meep (unregistered) in reply to Severity One
    Severity One:
    TRWTF is pointers.

    No, TRWTF is null-delimited, mutable strings. We still use them 30 years after they've proven to be a total security disaster. Because they're "simple."

    Every time a project manager says, "keep it simple, stupid", if you don't slap the shit out of him, God will make an earthquake kill a village full of orphans and puppies.

  • Mitch (unregistered) in reply to Meep
    Meep:
    Severity One:
    TRWTF is pointers.

    No, TRWTF is null-delimited, mutable strings. We still use them 30 years after they've proven to be a total security disaster. Because they're "simple."

    Every time a project manager says, "keep it simple, stupid", if you don't slap the shit out of him, God will make an earthquake kill a village full of orphans and puppies.

    Forget Strings. Just use space delimited words or full-stop (period) delimited sentences.

    50 characters should be enough to store a word (the scientists and medical researchers who think it's cool to use longer words can make longer words themselves). 1000 character sentence limits should be reasonable too.

    The whole concept of a string is a bit like the whole concept of a number - and we had a pleasant conversation the other day about the whole concept of numbers....

    At the end of the day, we store DATA. Data is just arbitrary sequences of characters (expressed one way or another). Arrays make perfect sense for storing data, but the limits have to be controlled or tracked. It is fine to have a dynamically allocated array (in C terms, memory that we might realloc) provided you always know how long it is (or at least the maximum length it's supposed to be). Nothing wrong with the string being mutable either, provided you do some checking on what is being mutated. Null terminators, on the other hand, are a little silly because they force the exclusion of null from your data. That said, for many applications (like text processing) they work well enough.

  • pangalactic (unregistered) in reply to lucidfox

    So true - I only ever wrote my own string implementation once. It was for converting between char *, std::string, wchar_t *, std::wstring, ATL::CAtlString, ATL::CStringT<TCHAR, StrTraitMFC<TCHAR> > &, ATL::CStringT<TCHAR, StrTraitMFC_DLL<TCHAR> > &, ATL::CComBSTR, _bstr_t, _variant_t, and their const references.

  • Jim (unregistered) in reply to lucidfox
    lucidfox:
    TRWTF is rolling out their own string implementation.

    You know, as if C++ doesn't have enough of these already.

    Just a thought - why are there already so many string implementations?

    Perhaps it's because over time people realize that the ones available might not work well for their purpose....

    What other functionality do languages (and, perhaps more to the point, the people who build libraries for those languages) provide multiple variants of because no one really does it very well?

  • Zirias (unregistered) in reply to Mason Wheeler
    Mason Wheeler:
    java.lang.Chris;:
    Of course, TRWTF is C++. It could have been an elegant and simple object oriented superset of C, but instead it's a glorious example of how not to design a programming language.

    Is it even possible to create an elegant and simple superset of something that is itself neither elegant nor simple?

    C IS elegant and simple if a language replacing machine code with something readable and structured, but still close to the machine, is what you want. Of course, you'll never want to create "enterprise" software in C :)

    Oh and -- as long as you know how to actually implement OO concepts yourself, C gives you everything you need: pointers, structs and typedefs. So that's just another option for OOP "on top of" C.

  • (cs) in reply to Asdlarfgs
    Asdlarfgs:
    Steve The Cynic:
    So, to present a message like "Elements found: {number}", he would exploit JavaScript's unnatural fascination with automatic type conversions:
      var value = calculationFunction( parameters );
      var message = "Elements found: " + value;
    How... how is that "exploiting JavaScript's unnatural fascination with automatic type conversions"? That's just the natural way to do it in most dynamic languages.
    The fact that it is the natural way to do it doesn't make it a good idea. I think I'd be safe in saying that the easiest way to make bugs hard to find is for the language itself to silently work around their consequences.

    Arguably the correct response to string+number should be a compiler error, as there isn't a single natural obvious interpretation in human terms.

    Sure, in C and C++, given the implied type of "Elements found: " (array of 17 const chars in C++, const pointer to const char in C, IIRCWIPD), the crash-inducing interpretation that he stumbled across is correct, if not entirely reasonable. And sure, in JavaScript, the interpretation of string+number as "convert the thing on the right to a string and construct a new string which is the left one with the right one concatenated to it" is also correct, but given the length of the description, perhaps equally unreasonable.

    So, we have two correct but deeply incompatible concepts of what to do with this particular construct, so I'd say that, as a context-free generalisation, neither of them is correct.

    Python, also a dynamic language, has the right idea: (2.7.2, YMMV on other versions)

    "abc" + 5 Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: cannot concatenate 'str' and 'int' objects

  • Captain Fairly Understandable (unregistered) in reply to Zylon
    Zylon:
    pitchingchris:
    I disagree. There are plenty of useful things you can do with operator overloading.
    But is there anything you can do with operator overloading that you can't do without it?
    No...
    Y'know, other than making your code harder to decipher for maintenance programmers? Operator overloading always seemed like self-indulgent syntactic sugar to me.
    The key is making the class self-documenting enough that it's obvious what overloaded operators do.

    The pay-off of operator overloads vs. methods is shorter code, which is in itself an aid to readability if built on good foundations. E.g.

    str1 + str2 + str3 + str4

    vs.

    str1.append(str2).append(str3).append(str4)

  • (cs) in reply to Cbuttius
    Cbuttius:
    Calling a pointer a reference doesn't mean it is not a pointer.
    No. There are a few key differences. References maintain some ownership of the referenced item, pointers don't. You can do certain types of arithmetic on a pointer (i.e., there's limited interconversion with integers) but references don't support any of that. While yes, pointers are part of the obvious way you'd implement references, there are distinct differences that make references simultaneously quite a bit less powerful and much safer, a good trade-off in a lot of situations.

    C++'s references are a particularly moronic way of doing references, which is why it's hard to think of another language that does them that way. By contrast, vast numbers of languages use references. (Heck, I suspect they're in Lisp and that predates C by 14 years.)

  • (cs) in reply to Stev
    Stev:
    You wouldn't use a chainsaw to slice a lemon in a cramped kitchen, nor would you use a carving knife to cut down a tree. C++ is no different - it has its place and where it is used appropriately it is positively unmatched. Likewise, when used inappropriately it'll cause massive amounts of bloodshed.
    The thing is, though, that C++ has a pretty small niche where it truly shines. In my opinion, this is mostly caused by (a) the syntax and (b) the lack of a standardised set of libraries until much later. And because there are plenty of situations where the lowered performance is acceptable as a trade-off for greater stability (meaning that it's more difficult to shoot yourself in the foot).
  • Cbuttius (unregistered) in reply to pangalactic
    pangalactic:
    So true - I only ever wrote my own string implementation once. It was for converting between char *, std::string, wchar_t *, std::wstring, ATL::CAtlString, ATL::CStringT<TCHAR, StrTraitMFC<TCHAR> > &, ATL::CStringT<TCHAR, StrTraitMFC_DLL<TCHAR> > &, ATL::CComBSTR, _bstr_t, _variant_t, and their const references.

    You missed vector<char> and vector<wchar_t>

  • gallier2 (unregistered) in reply to Mason Wheeler

    D, really. Unfortunately not widespread enough to be really used in production but still the best successor to C yet.

  • Cbuttius (unregistered) in reply to danbruc
    danbruc:
    Cbuttius:
    Calling a pointer a reference doesn't mean it is not a pointer.
    That's not true. In C# there is a single thing you can do with a reference - dereference it. A reference is either a null reference or a reference to an object of its specific type. You are neither allowed to use pointer math to point to the interior of an object or to random garbage in memory, nor are you allowed to cast a reference to an arbitrary type without generating at least an invalid cast exception at runtime (unless the cast is valid of course).

    The primary difference appears to be that an illegal pointer operation produces "undefined behaviour" in C++ but a runtime exception in C#, which makes it easier to recover from or debug in C#.

    Either way it's still a bug. And either way, your code compiles then fails at runtime. The ideal solution for a bug is a compile-time error so that it never gets released.

    In C++ a pointer can "double up" as an iterator over a contiguous memory buffer, thus allowing you to perform arithmetic on it. This is inherited from C and from being "closer to the hardware". In managed languages there is no concept of a "contiguous buffer": you don't care how the data is actually stored in memory, it is just an abstract collection of data objects, so you have to use the language's iterators to move between them.

    In C++, it is exactly this optimisation that allows you to handle large collections more efficiently.

    Some of the real WTFs in C++ as a language are not being mentioned here. These include:

    • having to define a virtual destructor in base classes to make them delete properly.

    • delete and delete[]. Ideally the compiler should just know.

    • iostream: rather horrific

    in addition to the lack of a standard for libraries (DLLs, shared objects, etc) as well as loads of other stuff that should ideally be in a standard library but is not.

  • Cbuttius (unregistered) in reply to gallier2
    gallier2:
    D, really. Unfortunately not widespread enough to be really used in production but still the best successor to C yet.

    Because it needs a huge library base which it doesn't have.

  • Zirias (unregistered) in reply to Cbuttius
    Cbuttius:
    danbruc:
    Cbuttius:
    Calling a pointer a reference doesn't mean it is not a pointer.
    That's not true. In C# there is a single thing you can do with a reference - dereference it. A reference is either a null reference or a reference to an object of its specific type. You are neither allowed to use pointer math to point to the interior of an object or to random garbage in memory, nor are you allowed to cast a reference to an arbitrary type without generating at least an invalid cast exception at runtime (unless the cast is valid of course).
    The primary difference appears to be that an illegal pointer operation produces "undefined behaviour" in C++ but a runtime exception in C#, which makes it easier to recover from or debug in C#.
    Although this is technically correct, you DO realize the only "illegal pointer operation" possible on a C# reference IS dereferencing the null reference? As opposed to all the dangling pointer issues imaginable in C/C++?

    That's what makes a reference a reference. It's the same when it comes to implementation on the (virtual or physical) machine -- but in terms of the language, it's something completely different. You can manipulate a pointer as you wish, you cannot manipulate a reference -> less powerful, more secure.

    C# knows pointers as well, but clearly separates them from references (cannot refer a C# object using a C# pointer) and only allows them in a discouraged "unsafe" context.

    TRWTF is C++ using both concepts at the same time, and interchangeably.

  • Cbuttius (unregistered) in reply to Mitch
    Mitch:
    Meep:
    Severity One:
    TRWTF is pointers.

    No, TRWTF is null-delimited, mutable strings. We still use them 30 years after they've proven to be a total security disaster. Because they're "simple."

    Every time a project manager says, "keep it simple, stupid", if you don't slap the shit out of him, God will make an earthquake kill a village full of orphans and puppies.

    Forget Strings. Just use space delimited words or full-stop (period) delimited sentences.

    50 characters should be enough to store a word (the scientists and medical researchers who think it's cool to use longer words can make longer words themselves). 1000 character sentence limits should be reasonable too.

    The whole concept of a string is a bit like the whole concept of a number - and we had a pleasant conversation the other day about the whole concept of numbers....

    At the end of the day, we store DATA. Data is just arbitrary sequences of characters (expressed one way or another). Arrays make perfect sense for storing data, but the limits have to be controlled or tracked. It is fine to have a dynamically allocated array (in C terms, memory that we might realloc) provided you always know how long it is (or at least the maximum length it's supposed to be). Nothing wrong with the string being mutable either, provided you do some checking on what is being mutated. Null terminators, on the other hand, are a little silly because they force the exclusion of null from your data. That said, for many applications (like text processing) they work well enough.

    It is somewhat interesting that strings are considered unlimited but numbers are limited to a certain number of bits and you need lots of different versions to be able to deal with that and there isn't even an unlimited version for when you need one.

    It's also still a WTF in C++ that the sizes of numeric types are not portable across platforms.

    The lack of an immutable reference-copied string class is a shortcoming in the C++ library. A lot of the time you want a "create in one place then pass around a lot" string. It is fairly trivial to write one too, and should ideally "copy by value" short strings (using a local buffer).

    Then you have all the issues with unsigned char, char and signed char when in reality you probably want char to be unsigned most of the time unless you want to handle "tiny" ints that might be negative. I don't recall ever having such a situation (where I couldn't use int instead). In this case you can then use the same data structure for binary blobs as you would use for textual strings, i.e. they are just a collection of bytes.

    Binary representation of structured data, i.e. marshalling and unmarshalling for sending across networks, storing in files etc. is badly supported by the C++ standard libraries. iostream even "fools" you by allowing you to set a "binary" mode. I remember when I was a naive beginner and after setting the stream to binary, didn't understand why my numbers were still printing as text....

    I think I stuck with FILE* functions long into my C++-writing career, whereas others seem to write all their code in C except for iostream, something I never understand. Why use C++ only for its weakest feature?

  • Sten (unregistered) in reply to Severity One

    TRWTF is stupid developers using pointers. Seriously, there should be a C++ switch that would allow raw pointers only for experienced developers, because they know that they should not use them unless absolutely necessary, and even then they should be very, very careful.

  • (cs)

    That's brillant!

  • Anonandon (unregistered) in reply to Cbuttius
    Cbuttius:
    <snip>

    Some of the real WTFs in C++ as a language are not being mentioned here. These include:

    • having to define a virtual destructor in base classes to make them delete properly.

    • delete and delete[]. Ideally the compiler should just know.

    • iostream: rather horrific

    in addition to the lack of a standard for libraries (DLLs, shared objects, etc) as well as loads of other stuff that should ideally be in a standard library but is not.

    I was about to say "all protected/public methods not being virtual by default", but would this force the compiler to create inheritance infrastructure that you knew was never going to be used?
  • Anonandon (unregistered) in reply to Sten
    Sten:
    TRWTF is stupid developer using pointers. Seriously, there should be C++ switch that would allow using raw pointers only to experienced developers because they know that they should not use them unless absolutely necessary and even then they should be very very very careful.

    I guess there'd have to be an exception for header files included by your source, since any of these might have raw pointers in its declarations, and at worst might even include code that allocated or released memory?

  • Fell (unregistered) in reply to Severity One
    Severity One:
    TRWTF is pointers.

    -- Umm yeah... because memory's really useful without ADDRESSES

  • Cbuttius (unregistered) in reply to Anonandon
    Anonandon:
    I was about to say "all protected/public methods not being virtual by default", but would this force the compiler to create inheritance infrastructure that you knew was never going to be used?

    You can have private virtual methods and they make sense. Many developers like a pattern of having public non-virtual methods calling private virtual ones. Sometimes protected virtual works better here so that one implementation can call its base-class implementation in addition to adding to it.

    If it were possible to extend the language, you could allow "interface" as well as "class" and "struct", and when a class is declared with "interface" all methods are virtual by default. Maybe even pure virtual. An interface would not have a user-defined constructor, would automatically have a virtual destructor, and would not be copyable (other than with a possible clone() function) or assignable.

    There would be no need to create a compilation unit to house its v-table (i.e. its virtual destructor implementation that is empty) because all compilers would view it exactly the same (there would be standard v-table implementation).

    This is not essential though. We can manage without it. It doesn't add anything new to the language you can't already do if you are skilled in it. Yes, virtual void whatever() = 0; looks messy but it works and there are more important issues to address in my opinion.

    If I were writing a new language or redesigning one, I would keep the concept of "headers" to show the interface of a class and keep it separated from its implementation. I would, however, change the way it currently works in C++, where a header file is simply code that gets added to the source of the compilation unit. Instead, a header would be a full-blown class and function definition file that would get compiled separately, and what is defined there would be added to some kind of "database". Then your compilation units would be compiled on top of that. Of course it would get tricky when it comes to templates (or generics), meta-programming, type traits and partial specialization. Type traits would probably become a language feature, so when a class/type is defined, you would also define what traits it has.

  • ¯\(°_o)/¯ I DUNNO LOL (unregistered) in reply to Zirias
    Zirias:
    Oh and -- as long as you know how to actually implement OO concepts yourself, C gives you everything you need: pointers, structs and typedefs. So that's just another option for OOP "on top of" C.
    I've done it before with function pointers. But C's pointer-to-function syntax is its own whole can of WTF.
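For instance, a struct carrying function pointers acts as a hand-rolled v-table; the `Shape` example below is invented (it is plain C apart from the fence label):

```cpp
struct Shape;                            // forward declaration
typedef double (*AreaFn)(const Shape*);  // C-style "method" pointer

struct Shape {
    AreaFn area;    // the "virtual method" slot
    double w, h;
};

// Two "subclasses" differ only in which function the slot points to.
double rect_area(const Shape* s) { return s->w * s->h; }
double tri_area(const Shape* s)  { return s->w * s->h / 2.0; }

// "Constructors" that wire up the right slot.
Shape make_rect(double w, double h) { Shape s = { rect_area, w, h }; return s; }
Shape make_tri(double w, double h)  { Shape s = { tri_area,  w, h }; return s; }
```

The call site `s.area(&s)` is exactly the self-passing that C++'s `this` pointer hides, and it is also where the pointer-to-function declarator syntax gets hairy.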
    Cbuttius:
    - iostream: rather horrific
    I am still firmly convinced that the whole point of the iostream library was "Hey lookie here, I can overload the left shift and right shift operators to do I/O thingies!" (And now I have a vision of Cletus from The Simpsons saying that.)

    So how come, when people defend operator overloading, it's always "BUT MAAAAATRIX MATH!"? Because so many people need to do matrix math.

    Cbuttius:
    Then you have all the issues with unsigned char, char and signed char when in reality you probably want char to be unsigned most of the time unless you want to handle "tiny" ints that might be negative. I don't recall ever having such a situation (where I couldn't use int instead).
    It's so much fun when I'm trying to display bytes as hex using %02X format and I get FFFFFFC9, etc. in the output. Sure, some compilers let you set char to default as unsigned, but how you specify that is basically non-portable.
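The FFFFFFC9 comes from sign extension: a plain (signed) char is promoted to int before printf ever sees it. Casting through unsigned char first avoids it, as in this small sketch (byte_hex is an invented helper):

```cpp
#include <cstdio>
#include <string>

// Format one byte as two hex digits. Without the cast, a plain char holding
// 0xC9 on a signed-char platform is promoted to int -55, and "%02X" then
// prints FFFFFFC9 instead of C9.
std::string byte_hex(char c) {
    char buf[3];
    std::snprintf(buf, sizeof buf, "%02X", (unsigned char)c);
    return std::string(buf);
}
```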
    Cbuttius:
    I think I stuck with FILE* functions long into my C++-writing career, whereas others seem to write all their code in C except for iostream, something I never understand. Why use C++ only for its weakest feature?
    Because it holds your hand. (Except when you forget and leave the stream in hex or some strange formatting mode. Then it abandons you in the desert.) And printf format specifiers are haaaaaaaard.

    That's the one feature of C++ that I absolutely refuse to use. It wasn't needed, and it's just a stunt trick to justify operator overloading abuse.

  • (cs) in reply to Cbuttius
    Cbuttius:
    You can have private virtual methods and they make sense. Many developers like a pattern of having public non-virtual methods calling private virtual ones. Sometimes protected virtual works better here so that one implementation can call its base-class implementation in addition to adding to it.
    Isn't "private" meant to imply that it's, well, private? In Java, you can't have private abstract methods, partly because it's discouraged to invoke anything that could be overridden from the constructor.
    Cbuttius:
    If it were possible to extend the language, you could allow "interface" as well as "class" and "struct", and when a class is declared with "interface", all methods would be virtual by default, maybe even pure virtual. An interface would have no user-defined constructor, would automatically have a virtual destructor, and would be neither copyable (other than via a possible clone() function) nor assignable.

    There would be no need to create a compilation unit to house its v-table (i.e. its empty virtual destructor implementation), because all compilers would view it exactly the same (there would be a standard v-table implementation).

    This is not essential though. We can manage without it. It doesn't add anything new to the language you can't already do if you are skilled in it. Yes, virtual void whatever() = 0; looks messy but it works and there are more important issues to address in my opinion.

    If I were writing a new language, or redesigning one, I would keep the concept of "headers" to show the interface of a class, kept separate from its implementation. I would however change the way it currently works in C++, where a header file is simply code that gets pasted into the source of the compilation unit. Instead, a header would be a fully-fledged class and function definition file that would get compiled separately, and what is defined there would be added to some kind of "database". Your compilation units would then be compiled on top of that. Of course it would get tricky when it comes to templates (or generics), meta-programming, type traits and partial specialization. Type traits would probably become a language feature, so when a class/type is defined, you would also define what traits it has.

    So basically, you've described Java.

  • Asdlarfgs (unregistered) in reply to Zylon
    Zylon:
    pitchingchris:
    I disagree. There are plenty of useful things you can do with operator overloading.
    But is there anything you can do with operator overloading that you can't do without it? Y'know, other than making your code harder to decipher for maintenance programmers? Operator overloading always seemed like self-indulgent syntactic sugar to me.
    Fucking no. As they said:
    Captain Fairly Understandable:
    myStr + str2 + str3 + str4
    vs
    MyStr.append(str2).append(str3).append(str4)
    .
    History Teacher:
    Do not underestimate syntactic sugar.

    There is something I think people tend to forget: EVERY DETAIL IS IMPORTANT. Yes, of course you can do everything without operator overloading. You can also do everything in assembly. Do you program in assembly? Or Turing machines? Those little conveniences are the whole reason to use a language, and the whole reason people derail threads every day to bash or defend some language. (Portability? No, assembly is portable too. You just have to write a program that translates it to another architecture.)

    Of course, you could argue that the inconvenience of not knowing whether an operator is calling a function is worse than the convenience of operator overloading; arguably there could be ways to mitigate this. Or that the inconvenience of having to implement it leaves you with fewer tools available. But "you don't technically need it" is never a good argument.
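To make the quoted comparison concrete, here is a toy MyStr (hypothetical, in the spirit of the quoted snippet) where operator+ is nothing but sugar over append():

```cpp
#include <string>

// Toy string wrapper: operator+ is pure sugar over append().
struct MyStr {
    std::string data;
    MyStr append(const MyStr& other) const { return MyStr{ data + other.data }; }
};

MyStr operator+(const MyStr& a, const MyStr& b) { return a.append(b); }
```

Both spellings produce identical results; the operator version merely reads like the concatenation it expresses.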

  • Mike (unregistered) in reply to Steve The Cynic

    Typeless languages scare the heck out of me... then again so do languages that make type conversion too easy...

  • senior moron (unregistered)

    TRWTF is all you arrogant bastages think you know best. I've never seen such a bunch of knuckleheads so full of themselves.

  • (cs) in reply to Jack
    Jack:
    Dave-Sir:
    However, because of precedence rules
    Precedence? WTF is precedence? That looks like a Very Big Word. I should not have to know it. I should not have to know anything. The computer should just do what I want.

    Programming languages are TRWTF. Nobody wants to type commands. So boring. And hard, dammit! There should just be a big round candy colored button and when I click it, it reads my mind and Just Works. If you can't design something to cater to my fantasy, you are no good.

    Ah, a Mac guy. At first I thought, "troll," then I caught on from the use of, "candy colored." You know, like the cute little iMacs that no one uses any more.

  • sdlkfh (unregistered) in reply to senior moron
    senior moron:
    TRWTF is all you arrogant bastages think you know best. I've never seen such a bunch of knuckleheads so full of themselves.
    With a hint of irony, I present......"senior moron" *clapping*
  • Imogen (unregistered) in reply to Severity One
    Severity One:
    Stev:
    You wouldn't use a chainsaw to slice a lemon in a cramped kitchen, nor would you use a carving knife to cut down a tree. C++ is no different - it has its place and where it is used appropriately it is positively unmatched. Likewise, when used inappropriately it'll cause massive amounts of bloodshed.
    The thing is, though, that C++ has a pretty small niche where it truly shines. In my opinion, this is mostly caused by (a) the syntax and (b) the lack of a standardised set of libraries until much later. And because there are plenty of situations where the lowered performance is acceptable as a trade-off for greater stability (meaning that it's more difficult to shoot yourself in the foot).
    Horses for Courses. There are situations I'd consider using C++ and situations I wouldn't.

    Similarly, there are situations I'd consider using Java/C#/<ruby/LISP/Miranda/Prolog/<insert any language here> and situations I wouldn't.

  • Bill (unregistered)

    It frightens me a little that there's so much language-bashing here. A bad tradesman always blames his tools. Sure, some languages are easier to use than others, and some are better suited to one task or another, but the question of what is better depends on the purpose and, to a degree, the user. Higher-level languages are often used to create business applications, because there's a perception that they are more idiot-proof and allow reasonable complexity to be programmed fairly quickly, even with inexperienced developers. Lower-level languages tend to have a reputation for tasks that are closer to the hardware (such as drivers and embedded systems, even with no file system) and are also used for tasks that need to flog the processor (scientific calculation might be one example).

    Take a look around the world at off-the-shelf software you buy, and see if you can work out what languages are most common in the 'home user' environment.

  • Zirias (unregistered) in reply to Bill
    Bill:
    A bad tradesman always blames his tools.
    And we all know tradesmen cannot handle tools anyway ...
  • (cs) in reply to Bill
    Bill:
    It frightens me a little that there's so much language-bashing here.
    Only C++ bashing from me, but that's my pet hate. I like C, I like Java, but C++ is an abomination (in my opinion, of course).
    Bill:
    A bad tradesman always blames his tools.
    You may want to rephrase that. :)
    Bill:
    Sure, some languages are easier to use than others, and some are better suited to one task or another, but the question of what is better depends on the purpose and, to a degree, the user. Higher-level languages are often used to create business applications, because there's a perception that they are more idiot-proof and allow reasonable complexity to be programmed fairly quickly, even with inexperienced developers. Lower-level languages tend to have a reputation for tasks that are closer to the hardware (such as drivers and embedded systems, even with no file system) and are also used for tasks that need to flog the processor (scientific calculation might be one example).
    Sure, but you go and try to exploit a buffer overrun in Java.

    I still see the occasional security advisory that would allow some exploit, sometimes even a remote root exploit. And invariably, this has to do with improper handling of byte arrays.

    So even though most system software is written in C (or C++), it wouldn't hurt (except for performance) if it became some managed version of C. Not all the way to the virtualisation that Java offers, but with some extra checks, particularly boundary checks.

  • foxyshadis (unregistered) in reply to Mike
    Mike:
    Typeless languages scare the heck out of me... then again so do languages that make type conversion too easy...

    In that case, I would assume that inheritance and void pointers scare you too. Every language has some way around static typing.

    In languages with true dynamic typing, the one and only legitimate use of it is when you can get various types into a single variable and want to coerce them all into one type, without being too limited in what you'll initially accept. All other uses are laziness in my experience. The more common scenario, where one variable has one type for its entire lifetime, is extremely useful whether or not that type is declared or made implicit. Same benefits either way, which is why C# got the 'var' keyword.

  • Cbuttius (unregistered) in reply to Severity One
    Severity One:
    Cbuttius:
    You can have private virtual methods and they make sense. Many developers like a pattern of having public non-virtual methods calling private virtual ones. Sometimes protected virtual works better here so that one implementation can call its base-class implementation in addition to adding to it.
    Isn't "private" meant to imply that it's, well, private? In Java, you can't have private abstract methods, partly because it's discouraged to invoke anything that could be overridden from the constructor.

    Not exactly, private means the access is restricted only to the class in which it is defined and its friends. It doesn't mean you are not supposed to know it exists. At first programmers did find it difficult to comprehend though, so much so that the "official" C++ faq (the one at parashift.com) advised programmers to use protected. I usually prefer protected for the reason I described earlier.

    Cbuttius:
    If it were possible to extend the language, you could allow "interface" as well as "class" and "struct", and when a class is declared with "interface", all methods would be virtual by default, maybe even pure virtual. An interface would have no user-defined constructor, would automatically have a virtual destructor, and would be neither copyable (other than via a possible clone() function) nor assignable.

    There would be no need to create a compilation unit to house its v-table (i.e. its empty virtual destructor implementation), because all compilers would view it exactly the same (there would be a standard v-table implementation).

    This is not essential though. We can manage without it. It doesn't add anything new to the language you can't already do if you are skilled in it. Yes, virtual void whatever() = 0; looks messy but it works and there are more important issues to address in my opinion.

    If I were writing a new language, or redesigning one, I would keep the concept of "headers" to show the interface of a class, kept separate from its implementation. I would however change the way it currently works in C++, where a header file is simply code that gets pasted into the source of the compilation unit. Instead, a header would be a fully-fledged class and function definition file that would get compiled separately, and what is defined there would be added to some kind of "database". Your compilation units would then be compiled on top of that. Of course it would get tricky when it comes to templates (or generics), meta-programming, type traits and partial specialization. Type traits would probably become a language feature, so when a class/type is defined, you would also define what traits it has.

    So basically, you've described Java.

    No, in Java you cannot define a class header and put the implementation elsewhere. I cannot look at a header for a class to see what its methods are.

    A good extension of C++ would be to allow this:

    class Derived : public Base;
    

    as a partial forward declaration. I don't think that's even allowed in C++11.

    Another thing that should be allowed is:

    typename std::string;
    

    or

    namespace std { typename string; }
    

    You cannot do

    namespace std { class string; }
    

    because string is not actually a class; it is a typedef (alias) for basic_string<char, char_traits<char>, allocator<char> >.

    Of course if we got rid of headers the way they are you would simply put in

    using std::string;

    and that would automatically allow you to use std::string in your project with all its methods. Headers would be used to create your own classes and functions.

  • Cbuttius (unregistered) in reply to ¯\(°_o)/¯ I DUNNO LOL

    ¯\(°_o)/¯ I DUNNO LOL:
    Cbuttius:
    - iostream: rather horrific
    I am still firmly convinced that the whole point of the iostream library was "Hey lookie here, I can overload the left shift and right shift operators to do I/O thingies!" (And now I have a vision of Cletus from The Simpsons saying that.) ...

    That's the one feature of C++ that I absolutely refuse to use. It wasn't needed, and it's just a stunt trick to justify operator overloading abuse.

    ostringstream is the best option the library provides as a string builder.

    The iomanip feature is all wrong... If I want to print a char in "%02x" format I want to do just that... not change the stream so that everything I output will be formatted that way from now on until I change it back.
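A short illustration of that stickiness, using an ostringstream so the effect shows up in the result string (sticky_demo is an invented helper):

```cpp
#include <sstream>
#include <string>

// std::hex changes the stream's state: it applies to every later integer
// insertion, not just the next one, until std::dec switches it back.
std::string sticky_demo() {
    std::ostringstream os;
    os << std::hex << 255;         // "ff"
    os << ' ' << 255;              // still hex: "ff" again
    os << ' ' << std::dec << 255;  // back to decimal: "255"
    return os.str();
}
```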

    You can't conveniently use the istream >> operator to read back what you wrote the way you wrote it, because it has no way of knowing where tokens end.

    Output the numbers 1, 2, 3 in that order and read them back, and it will read 123 and then not find any more numbers. Output the string "Hello world" and it will read back only "Hello", because it treats the space as the end of the string.
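Both failure modes in one sketch (helper names invented):

```cpp
#include <sstream>
#include <string>

// Write 1, 2, 3 with operator<<, then read back with operator>>:
// without separators the digits fuse into a single token, 123.
int roundtrip_numbers() {
    std::ostringstream os;
    os << 1 << 2 << 3;               // produces "123"
    std::istringstream in(os.str());
    int n = 0;
    in >> n;                         // reads 123, not 1
    return n;
}

// operator>> on a std::string stops at whitespace, so "Hello world"
// comes back as just "Hello".
std::string roundtrip_string() {
    std::istringstream in("Hello world");
    std::string s;
    in >> s;
    return s;
}
```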

    In any case it is flawed that istream >> should be used to write into "objects". In reality you load into factories and then use those factories to create objects. Yes, I know the factories are also objects, but the asymmetry is that you write directly from objects yet read back into factories.

    I am not sure how other languages handle this any better.

    Of course, there is more than one way to represent an object in persistent format, and you should be able to decouple these whilst ensuring your model is extensible in both directions. (This is where the visitor pattern in its classic form fails, though it works with an adapter pattern clipped on.)

  • null (unregistered) in reply to Severity One
    Severity One:
    History Teacher:
    Severity One:
    TRWTF is pointers.
    ...are you the author of this particular representative line, by any chance?
    No.
    History Teacher:
    TRWTF are coders who don't understand pointers, yet touch any language less separated from hardware than crossplatform HTML+javascript.
    Oh, I understand pointers, but they are too easy to muck up, especially with C++'s Byzantine syntax, and an object-oriented language offers enough tools to pretend that pointers do not exist (such as Java does). Unless you're doing serious low-level or high-performance computing, there isn't much use for pointers.

    Ever heard of linked lists, passing a variable 'by reference', etc.?

  • Otherwise also known as ... (unregistered) in reply to Cbuttius
    Cbuttius:
    Severity One:
    So basically, you've described Java.
    No, in Java you cannot define a class header and put the implementation elsewhere.
    Surely you can. However the class header in the Java world is named "interface".

    CAPTCHA: usitas - Take the interface and usitas class header.

Leave a comment on “Lucky Pointing”
