Admin
You've literally just provided an argument and a counter-argument to it at the same time. We understand the point of syntactic sugar. Nobody is going to argue with that: it's a valuable thing that makes our lives easier. But in the end, syntax is in the realm of language design. When you allow people to define the operator implementations in the language, you're effectively handing them the ability to write their own language.
There's a huge difference between not using any syntactic sugar (i.e., not using any higher-level languages) and being able to define your own language by overloading syntax features. C++ isn't a replacement for yacc/bison, so we shouldn't treat it as such. I don't care that you saved 7 characters by using a + instead of a .add; the latter is infinitely cleaner and understandable by anyone. I don't have to second-guess myself at every point when reading your code, wondering whether an operator is overloaded. And when using your code, I don't have to think about whether I should call a function or whether this object supports an operator.

When you want to invoke a custom operation on things, you call a function. This is convention, and it's a good thing, for the same reason syntactic sugar is a good thing: it provides consistency to a set of operations. Except in this case it's more meta, concerning the language and how we use it rather than specific objects in some application, so hierarchically speaking it's more important than sugar. In the end, that's all syntax features in a language do for us: provide consistency. Operator overloading fundamentally breaks that consistency. To throw away good conventions to invent your own nonstandard sugar is TRWTF.
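A minimal C++ sketch of the uncertainty being described (Logger is hypothetical, not from any real codebase): nothing in the language stops an overloaded operator from doing something its symbol doesn't suggest.

[code]
#include <iostream>
#include <string>

struct Logger {
    // "+" here neither adds nor concatenates: it writes a log line.
    // A reader can't know that without hunting down this definition.
    Logger& operator+(const std::string& msg) {
        std::cout << msg << '\n';
        return *this;
    }
};

int main() {
    Logger log;
    log + "started" + "finished";   // legal, and thoroughly misleading
}
[/code]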
Admin
C++ has to allow pointer arithmetic because it is a feature of C, and C++ inherits nearly all the features of C. It is not a complete superset, but pointer arithmetic is firmly part of the shared core.
C++ allows "dangling" pointers to exist for the same reason. Plus, it is not garbage collected, so you call delete (or free, which still works) when you decide you no longer need the object or memory. If there are still pointers pointing to it, the compiled code still does what you ask it to.
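A minimal sketch of that point, assuming nothing beyond standard C++:

[code]
void demo() {
    int* p = new int(42);
    int* q = p;   // a second pointer to the same object
    delete p;     // we decided we no longer need it
    // q now dangles: it still holds the old address, and the compiled
    // code will happily dereference it if asked (undefined behaviour).
    // int v = *q;
}
[/code]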
Java and C# are easier to debug because when you have a runtime error, the runtime checks for you. But these continual checks also make your code slower when there are no bugs. In C++ you usually have the option of a debug build, where the checks are made, and a release build where they are not.
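The standard library already reflects that split; a small illustration, grounded entirely in standard C++:

[code]
#include <cassert>
#include <cstddef>
#include <vector>

int get(const std::vector<int>& v, std::size_t i) {
    assert(i < v.size()); // compiled out entirely when NDEBUG is defined
    return v[i];          // operator[] performs no bounds check
    // v.at(i) would check on every call, throwing std::out_of_range
}
[/code]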
I do agree that exceptions are badly implemented in C++. But I think they are in C# and Java too. Using the type system for catching is a bad way of doing it. There should be a single exception class that has just a small header and a buffer of data. The header will tell you what kind of exception you have and the data section will give more detail. It should be a compiler option whether or not to include a call stack in the exception.
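A hedged sketch of what that proposal might look like (entirely hypothetical names; the comment gives no concrete design):

[code]
struct Exception {
    int  kind;          // header: an application-defined failure code
    char detail[240];   // data: free-form description of what went wrong
    // call-stack capture would be a build-time option, not always paid for
};
[/code]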
C++ is often used to build DLLs / shared objects rather than applications, and these will often expose a C interface. But by writing the implementation in C++ you can take advantage of the STL for collections, shared_ptr and RAII for better resource and memory management, polymorphism, and other positive features that make C++ an easier language to write (see the sketch below).
These shared objects are not only used for C programs. They are used for any language with C bindings, including many scripting languages.
This is C++'s domain. So are the scripting-language interpreters themselves, and so is writing the libraries that this "domain" can take advantage of.
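A minimal sketch of that pattern, with hypothetical names (the widget_* functions are invented for illustration): a flat C-callable surface over a C++ implementation.

[code]
#include <vector>

struct Widget {
    std::vector<int> data;          // STL internally
};

extern "C" {                        // C interface for any language with C bindings
    Widget* widget_create()              { return new Widget(); }
    void    widget_destroy(Widget* w)    { delete w; }
    void    widget_add(Widget* w, int v) { w->data.push_back(v); }
}
[/code]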
Admin
But let's have a look at the following Java code:
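(A representative sketch, since only its shape matters here:)

[code]
static int sum(int[] values) {
    int total = 0;
    for (int i = 0; i < values.length; i++) {
        total += values[i];     // provably in bounds
    }
    return total;
}
[/code]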
In this case, you don't need a boundary check, because (a) the array is never re-assigned and (b) the index variable isn't modified outside the 'for' statement. Now whether the just-in-time compiler takes advantage of this is another issue, but the complaint against boundary checks doesn't always hold.
So instead of having multiple catch statements, and being able to take advantage of inheritance, you suggest that we should throw a switch statement at the exception header? That doesn't make any sense. Perhaps the exception system in Java and C# could be improved, but not in the way you describe it.

And what are you going to do if your application crashes, and you don't have a stack trace because you thought it necessary to speed up exception handling? That doesn't make any sense.
So much time in computer programs is wasted on either disc, databases, network resources or user input, that it doesn't make sense to try and optimise everything, unless you're doing serious high-performance computing. But then you're in a completely different ballpark.
Admin
[quote user="Severity One"][quote user="Cbuttius"]I do agree that exceptions are badly implemented in C++. But I think they are in C# and Java too. Using the type system for catching is a bad way of doing it. There should be a single exception class that has just a small header and a buffer of data. The header will tell you what kind of exception you have and the data section will give more detail. It should be a compiler option whether or not to include a call stack in the exception.[/quote]
So instead of having multiple catch statements, and being able to take advantage of inheritance, you suggest that we should throw a switch statement at the exception header? That doesn't make any sense. Perhaps the exception system in Java and C# could be improved, but not in the way you describe it.
And what are you going to do if your application crashes, and you don't have a stack trace because you thought it necessary to speed up exception handling? That doesn't make any sense.
So much time in computer programs is wasted on either disc, databases, network resources or user input, that it doesn't make sense to try and optimise everything, unless you're doing serious high-performance computing. But then you're in a completely different ballpark.[/quote]
On using "switch" to find out the cause: essentially yes, although it doesn't have to be a switch statement; it could be an exception-handling routine or factory or whatever. 99.9% of the time, though, when I do a try..catch I don't put in multiple handlers. You tried something, it failed, and you report back to the user why the operation did not work. Too often, exceptions are thrown from the "pImpl" using exception classes created by the pImpl, when the whole point is to encapsulate the pImpl away so the user doesn't know the implementation. At the user call level you want to know if it passed or failed, and to stop the execution immediately on failure.

Java code is full of try..catch everywhere. In C++ you are more likely to let exceptions fall through to where they will finally be caught and handled, and there is no need for finally in C++ because you have destructors.
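A minimal sketch of the destructor point (standard C++ only): the handle is released on the exception path with no finally block in sight.

[code]
#include <cstdio>
#include <stdexcept>

struct File {
    std::FILE* f;
    explicit File(const char* path) : f(std::fopen(path, "r")) {}
    ~File() { if (f) std::fclose(f); }  // runs during stack unwinding too
};

void work(const char* path) {
    File file(path);
    throw std::runtime_error("oops");   // ~File() still closes the handle
}
[/code]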
Exceptions are often not used on embedded systems exactly because they are too heavy. The ability to have a lighter exception system would get around this drawback. You always assume that every program runs on a big system with lots of resources. On constrained systems, even a call stack could well be far too heavy, which is why it needs to be optional.
Admin
What does the last item in a linked list point to? Does it not point to null?
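(In the usual singly linked list, yes; a C++ sketch:)

[code]
struct Node {
    int   value;
    Node* next;   // the tail's next is nullptr: it points to nothing
};
[/code]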
Admin
I completely disagree. Operator overloading has its place. Its use in C# string concatenation is an example. I much prefer:
myString = string1 + string2 + string3 over myString = string1.Add(string2).Add(string3)
It should be obvious when a class needs operator support, both to the creator and to its users. Blaming the feature because idiots misuse it is silly. If we removed every language feature that people abuse, we'd all be writing assembly.
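A C++ sketch of the "obvious" case (Money is a hypothetical value type): + keeps its conventional mathematical meaning and mutates nothing.

[code]
struct Money {
    long cents;
};

Money operator+(Money a, Money b) {      // operands taken by value: nothing mutates
    return Money{a.cents + b.cents};
}
// Money total = price + tax;  reads exactly like the arithmetic it is
[/code]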
Admin
It points to the single instance of the Null-Object!
Admin
It's not about idiots, and it's not about actual misuse. It's about the fact that it brings uncertainty into a language. For instance, both of your examples are bad: I can't tell immediately whether those are actually strings you're working with, because "+" is very generic and so is "Add". I can't tell if you're mutating objects or using immutables. The "+" operator in most languages, and in mathematics, does not modify its operands, which is not necessarily the case for overloaded operators. With overloaded operators you leave someone guessing about everything; they have to read a lot more source code just to understand what you're doing. Contrast that with something like this:
myString = String.concat(string1, string2, string3);
Or:
myString = String.copy(string1); myString.append(string2).append(string3);
Immutable and mutable. The type is implied. It's very unlikely anyone will EVER confuse what this code does. That's the point, and that is exactly what operator overloading violates. I don't care how pretty or succinct your code is. If that's really so important to you, please see http://www.ioccc.org/. They have lots of lovely new tricks for you to use.
Admin
A pointer is simply a variable, usually represented as an integer type, that specifies a memory address where the actual object is stored. If you assign the null pointer to a pointer variable, you make the pointer point to no memory address (which makes member access impossible).
In C and C++, this was spelled NULL (a macro, not a keyword), and what NULL really is is 0. It was decided that address 0x0000 was never going to be a valid pointer to an object, and so that is the value which gets assigned to indicate that a pointer is not valid.
Pointers always point to something, even when "uninitialized". The issue is whether the address they hold is legal for your program to access. The special address zero (aka null) is deliberately not mapped into your address space, so a segmentation fault is generated by the memory management unit (MMU) when it is accessed, and your program crashes. But since address zero is deliberately not mapped in, it becomes an ideal value to use to indicate that a pointer is not pointing to anything, hence its role as null. To complete the story: as you allocate memory with new or malloc(), the operating system configures the MMU to map pages of RAM into your address space, and they become usable.
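A tiny C++ illustration of that division of labour (this describes typical MMU-based systems; the standard itself only calls the dereference undefined):

[code]
void demo() {
    int* p = nullptr;   // address zero: a page the OS never maps
    if (p == nullptr) {
        // the sentinel role: "this pointer refers to nothing"
    }
    // *p = 42;         // on a typical MMU system the MMU faults and
    //                  // the program dies with a segmentation fault
}
[/code]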
A null object is not really null, it is empty.
Admin
That is not true; a pointer may look like a number, but it isn't defined to be one.
It is common for an operating system to allow you to store a pointer in a form whereby it fits in a number, but that does not guarantee that it has to be physically that way.
In fact, in C and C++ there is a notion of a "contiguous" buffer which enables you to perform pointer arithmetic, but behind the scenes the memory may well be fragmented, in just the same way that physical disk space might be even as the file "offset" keeps increasing.
This is the job of the kernel.
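A small sketch of that guarantee (standard C++): the arithmetic is defined over the one contiguous object, however the kernel maps it physically.

[code]
#include <cstddef>

void demo() {
    int buf[4] = {10, 20, 30, 40};       // one contiguous object, virtually
    int* p = buf;
    int third = *(p + 2);                // buf[2]: arithmetic within the object
    std::ptrdiff_t n = (buf + 4) - buf;  // 4: pointer differences defined too
    (void)third; (void)n;
}
[/code]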
It would actually have made sense for 0 to be a valid pointer, had the language not given 0 a special meaning (e.g. you can call free, delete or delete[] on a null pointer and it is a no-op).
C++11 does introduce nullptr as a keyword, but it has to remain backward compatible so as not to break all existing code.
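The classic illustration of why the keyword was added (standard C++11 behaviour):

[code]
void f(int)   { /* integer overload */ }
void f(char*) { /* pointer overload */ }

int main() {
    f(0);        // calls f(int): 0 is an integer literal first
    f(nullptr);  // calls f(char*): nullptr is unambiguously a pointer
}
[/code]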
Admin
Sigh. Another one falls into the trap of assuming everything is the same as what he works with.
Many small systems don't have MMUs of any sort, and do not crash in any way when address 0x0000 (an apt use of only four hex digits, because they are often limited to just 16 bits of address space) is accessed, either for read or for write. (Also, look up the charming message "null pointer assignment" for more on this. Hint: it's printed after a DOS program exits if offset 0 of the default data segment no longer contains the right canary value, indicating a write to *NULL.)
AS/400 systems (or whatever it is that IBM calls them now) do not use all-bits-zero as NULL pointers.
On at least some versions of AIX, there is readable-not-writable memory mapped at address 0x00000000, so programs that read from *NULL will not crash.
The results of dereferencing a NULL pointer in C and/or C++ are "undefined", and will therefore cause all sorts of grief. In the same vein, on some systems (e.g. the AS/400), code along the lines of the following will print FALSE.
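(The snippet isn't shown; presumably something of this shape, which tests whether a null pointer is represented as all zero bits:)

[code]
#include <cstdio>
#include <cstring>

int main() {
    void* p = NULL;                        // a null pointer
    unsigned char zeros[sizeof p] = {0};   // an all-zero-bits buffer
    // On most platforms the two match and this prints TRUE; on a system
    // where NULL is not all-bits-zero, it prints FALSE.
    std::puts(std::memcmp(&p, zeros, sizeof p) == 0 ? "TRUE" : "FALSE");
}
[/code]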
Uninitialised pointers might contain a bit pattern that is not and cannot be a valid pointer, and might, when compared (not dereferenced), even cause crashes on some hardware, such as one that attempts to load a segmented pointer on x86 where the DPL indicates a ring-zero-only segment, but the current code-segment DPL is ring-three.
Also, most systems do not call the operating system to allocate memory on every malloc(), but instead carve up large blocks, because that is more efficient in many ways.
Admin
The comments show me one thing: the real WTF is C++