Admin
Everyone can do that with C++
But with languages like Pascal, you have no chance
Admin
But yeah, first post and all that, never mind :)
Admin
But operator overloading is certainly something that should not be used without careful design. As such, its use should be limited to well-documented libraries, and in application code it should be used only as documented by the library. If it's not worth designing and documenting that carefully, then it's better to use normal methods with self-documenting names. In C++, the same applies to templates.
Admin
Pointers are not hard (except when you first start, but I suspect this is largely just because they overwhelm people). One day you have that 'aha' moment where suddenly the difference between * and & (and . and ->) makes sense. If this doesn't happen within a month of using C/C++, get a job using a different language....
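For anyone still waiting on that moment, here is a minimal sketch of how those four operators relate (the Point struct is invented for illustration):

[code]
#include <cstdio>

struct Point { int x; int y; };

int main() {
    Point p = {3, 4};    // an actual object: use . to reach its members
    Point *pp = &p;      // & takes the address; pp now points at p
    pp->x = 7;           // -> dereferences and accesses a member in one step
    (*pp).y = 8;         // *pp is the pointed-to object, so . works again
    printf("%d %d\n", p.x, p.y);   // prints "7 8" -- both writes hit p itself
    return 0;
}
[/code]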
Admin
I work with people all day who have never seen anything other than C# and/or Java. In itself, this is no biggie, but I cringe at how wastefully they create objects, simply because they don't seem to understand the work being done under the hood. Whatever people think about C or C++, having to explicitly create and destroy objects (or allocate/free memory in C's case) gives you an understanding that there is effort involved; simply forgetting about a reference we don't need any more promotes an attitude that the resources don't matter. Granted, you don't have to understand pointers per se to understand the work being done, but I think it is important that people do at least a semester or two in a low-level language so they understand what is actually going on. I think one of the real problems with many of today's IT graduates is that they are too willing to let the compiler optimize away their bad habits.
One of the examples I saw recently was a 'programmer' who implemented a small bitfield (4 bits) as a String in C# (and I was a bit concerned that, when I pointed out it might not be the best way to do it, a more senior programmer disagreed).
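For contrast, a minimal sketch of what those 4 bits look like as actual bits (the flag names are invented; the original was C#, but the idea is the same in any C-family language):

[code]
#include <cstdint>
#include <cstdio>

// Four flags packed into a single byte. The string version spends dozens
// of bytes (plus allocations and parsing) to represent the same thing.
enum : uint8_t {
    FLAG_A = 1 << 0,
    FLAG_B = 1 << 1,
    FLAG_C = 1 << 2,
    FLAG_D = 1 << 3,
};

int main() {
    uint8_t flags = 0;
    flags |= FLAG_B | FLAG_D;                  // set two flags
    flags &= static_cast<uint8_t>(~FLAG_D);    // clear one
    printf("B set? %s\n", (flags & FLAG_B) ? "yes" : "no");
    return 0;
}
[/code]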
A common argument I hear is "It doesn't matter, because we have so much memory and/or CPU available". When we're making a desktop Sudoku solver, this is probably true. When we're making an app that has to support multiple users querying a large database and returning results in real time, you start to realise the importance of minimising resource consumption. (More frighteningly, when we had serious performance issues after a poor change on a reasonably specced server, I was appalled that the common response, even from people within IT, was "Throw more CPU at it".)
Admin
I am TRWTF - I do not follow (exactly) what's happening here. I can see the bit where that hex offset is added to the output stream. I'm not sure how this causes a segmentation fault... explanation, please?
Admin
[quote user="Cbuttius"]Using the null character to denote end of string has its advantages but can be a WTF at times.
Prefixing the length means it is faster to calculate the length of the string but makes it less maintainable. Firstly the potential length of the string is limited by the length of its header, and secondly when you modify the string you have to modify this length.
In addition, there is the method of "tokenising" a string into lots of strings by overwriting each "separator" character with a null, allowing each component to be a string on its own. In fact, sometimes you use a double zero to indicate the end of the sequence.
Of course for binary "blobs" there will often be embedded null characters but in general these are not modified or lengthened the same way strings often are.
The real WTF with C++ and strings is the standard string itself: it wasn't that long ago that you couldn't even pass a std::string between DLLs / shared objects built with the same compiler.
All of which leads to you finding lots of "own implementations" of string, especially in legacy code.
Of course, legacy code is still around because people still use these projects, they are still being maintained, and they brought in enough money for the business.
[/quote]
[quote=lucidfox] TRWTF is rolling out their own string implementation.
You know, as if C++ doesn't have enough of these already. [/quote] uhm.....
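On the tokenising scheme quoted above, a minimal sketch of the layout (the same double-zero convention Windows uses for REG_MULTI_SZ values):

[code]
#include <cstdio>
#include <cstring>

int main() {
    // "one,two,three" after tokenising: each separator has been overwritten
    // with '\0', and the literal's implicit terminator supplies the second
    // zero that ends the whole sequence.
    char multi[] = "one\0two\0three\0";
    for (const char *p = multi; *p != '\0'; p += strlen(p) + 1) {
        printf("token: %s\n", p);   // each component is a string on its own
    }
    return 0;
}
[/code]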
Admin
If it compiled, and it behaved the way the code was written, then it compiled correctly. If it does not exhibit the desired behavior, you have a logic problem, and that is your fault. It still compiled properly.
Remember: the computer is a high-speed idiot. It does exactly what you tell it to. It's your fault if your code doesn't exhibit the desired behavior, not the compiler's.
Admin
No, TRWTF is null-delimited, mutable strings. We still use them 30 years after they've proven to be a total security disaster. Because they're "simple."
Every time a project manager says, "keep it simple, stupid", if you don't slap the shit out of him, God will make an earthquake kill a village full of orphans and puppies.
Admin
50 characters should be enough to store a word (the scientists and medical researchers who think it's cool to use longer words can make longer words themselves). 1000 character sentence limits should be reasonable too.
The whole concept of a string is a bit like the whole concept of a number - and we had a pleasant conversation the other day about the whole concept of numbers....
At the end of the day, we store DATA. Data is just arbitrary sequences of characters (expressed one way or another). Arrays make perfect sense for storing data, but the limits have to be controlled or tracked. It is fine to have a dynamically allocated array (in C terms, memory that we might realloc) provided you always know how long it is (or at least the maximum length it's supposed to be). There is nothing wrong with the string being mutable either, provided you do some checking on what is being mutated. Null delimiters, on the other hand, are a little silly because they force the exclusion of null from your data. That said, for many applications (like text processing) they work reasonably well.
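A minimal sketch of that "array plus tracked length" idea, in C terms (all names hypothetical):

[code]
#include <stdlib.h>
#include <string.h>

/* A counted buffer: the data may contain anything, including '\0',
 * because the length is tracked explicitly instead of sentinel-marked. */
typedef struct {
    char  *data;
    size_t len;
    size_t cap;
} buf_t;

int buf_append(buf_t *b, const char *src, size_t n) {
    if (b->len + n > b->cap) {                    /* grow geometrically */
        size_t cap = b->cap ? b->cap * 2 : 16;
        while (cap < b->len + n) cap *= 2;
        char *p = (char *)realloc(b->data, cap);
        if (!p) return -1;                        /* surface the failure */
        b->data = p;
        b->cap  = cap;
    }
    memcpy(b->data + b->len, src, n);
    b->len += n;
    return 0;
}
[/code]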
Admin
So true - I only ever wrote my own string implementation once. It was for converting between char *, std::string, wchar_t *, std::wstring, ATL::CAtlString, ATL::CStringT<TCHAR, StrTraitMFC<TCHAR> > &, ATL::CStringT<TCHAR, StrTraitMFC_DLL<TCHAR> > &, ATL::CComBSTR, _bstr_t, _variant_t, and their const references.
Admin
Perhaps it's because, over time, people realize that the ones available might not work well for their purpose....
What other functionality do languages (and, perhaps more to the point, the people who build libraries for said languages) provide multiple variants of because no one really does them very well?
Admin
C IS elegant and simple if a language replacing machine code with something readable and structured, but still close to the machine, is what you want. Of course, you'll never want to create "enterprise" software in C :)
Oh and -- as long as you know how to actually implement OO concepts yourself, C gives you everything you need: pointers, structs and typedefs. So that's just another option for OOP "on top of" C.
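A minimal sketch of what "OOP on top of C" looks like in practice, with a hand-rolled v-table (all names hypothetical):

[code]
#include <stdio.h>

/* The "v-table": one function pointer per virtual method. */
typedef struct Shape Shape;
struct Shape {
    double (*area)(const Shape *self);
};

/* "Inheritance": the base goes first, so a Circle* is usable as a Shape*. */
typedef struct {
    Shape  base;
    double r;
} Circle;

static double circle_area(const Shape *self) {
    const Circle *c = (const Circle *)self;  /* safe: base is the first member */
    return 3.14159265358979 * c->r * c->r;
}

int main(void) {
    Circle c = { { circle_area }, 2.0 };
    Shape *s = (Shape *)&c;                  /* "upcast" by hand */
    printf("area = %f\n", s->area(s));       /* dynamic dispatch by hand */
    return 0;
}
[/code]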
Admin
Arguably the correct response to string+number should be a compiler error, as there isn't a single natural obvious interpretation in human terms.
Sure, in C and C++, given the implied type of "Elements found: " (an array of 17 const chars in C++, an array of 17 chars in C), the crash-inducing interpretation that he stumbled across is correct, if not entirely reasonable. And sure, in JavaScript, the interpretation of string+number as "convert the thing on the right to a string and construct a new string which is the left one with the right one concatenated to it" is also correct, but given the length of that description, perhaps equally unreasonable.
So, we have two correct but deeply incompatible concepts of what to do with this particular construct, so I'd say that, as a context-free generalisation, neither of them is correct.
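A minimal sketch of the C/C++ reading described above, which also answers the segmentation-fault question earlier in the thread (the values are illustrative):

[code]
#include <iostream>

int main() {
    int count = 7;
    // No concatenation happens here: the literal decays to const char*,
    // and + simply offsets the pointer by `count` characters.
    std::cout << "Elements found: " + count << "\n";   // prints "s found: "

    // With a large enough count the pointer lands far outside the 17-byte
    // literal -- undefined behaviour, and in practice often a segfault:
    // int huge = 100000;
    // std::cout << "Elements found: " + huge << "\n";

    // What was presumably meant:
    std::cout << "Elements found: " << count << "\n";
    return 0;
}
[/code]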
Python, also a dynamic language, has the right idea (2.7.2, YMMV on other versions): evaluating "a" + 1 refuses to guess and raises a TypeError (cannot concatenate 'str' and 'int' objects).
Admin
The pay-off of operator overloads vs. methods is shorter code, which is in itself an aid to readability if built on good foundations; e.g. compare the two forms in the sketch below.
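A hypothetical before/after (the vec3 type and its method names are invented for illustration):

[code]
struct vec3 {
    double x, y, z;
    vec3 add(const vec3 &o) const { return { x + o.x, y + o.y, z + o.z }; }
    vec3 scale(double s)    const { return { x * s, y * s, z * s }; }
};

vec3 operator+(const vec3 &a, const vec3 &b) { return a.add(b); }
vec3 operator*(const vec3 &a, double s)      { return a.scale(s); }

// The same interpolation twice; the overloaded form reads like the
// maths it encodes, the method form buries it in call syntax.
vec3 lerp_methods(vec3 a, vec3 b, double t) {
    return a.scale(1.0 - t).add(b.scale(t));
}
vec3 lerp_operators(vec3 a, vec3 b, double t) {
    return a * (1.0 - t) + b * t;
}
[/code]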
Admin
C++'s references are a particularly moronic way of doing references, which is why it's hard to think of another language that does them that way. By contrast, vast numbers of languages use references. (Heck, I suspect they're in Lisp and that predates C by 14 years.)
Admin
You missed vector<char> and vector<wchar_t>
Admin
D, really. Unfortunately not widespread enough to be really used in production but still the best successor to C yet.
Admin
The primary difference appears to be that if you perform an illegal pointer operation that in C++ produces "undefined behaviour" and in C# produces a runtime exception, it's easier to recover or debug in C#.
Either way it's still a bug. And either way, your code compiles then fails at runtime. The ideal solution for a bug is a compile-time error so that it never gets released.
In C++ a pointer can "double up" as an iterator in a contiguous memory buffer thus allowing you to perform arithmetic on it. This is inherited from C and being "closer to the hardware". In managed languages there is no concept of a "contiguous buffer" as you don't care how the data is actually stored in memory, it is just a concept of a storage of multiple data objects. So you have to use the language iterators to move between them.
In C++, it is exactly this optimisation that allows you to handle large collections more efficiently.
Some of the real WTFs in C++ as a language are not being mentioned here. These include:
- having to define a virtual destructor in base classes to make them delete properly (see the sketch below);
- delete and delete[] -- ideally the compiler should just know;
- iostream: rather horrific;
- the lack of a standard for libraries (DLLs, shared objects, etc.), as well as loads of other stuff that should ideally be in a standard library but is not.
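A minimal sketch of the virtual-destructor item above:

[code]
#include <string>

struct Base {
    // Without this virtual destructor, `delete b` below would destroy
    // only the Base part -- formally undefined behaviour, and in practice
    // the Derived members (the std::string here) leak.
    virtual ~Base() {}
};

struct Derived : Base {
    std::string name = "leaks without virtual ~Base";
};

int main() {
    Base *b = new Derived;
    delete b;   // correct only because ~Base() is virtual
    return 0;
}
[/code]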
Admin
Because it needs a huge library base which it doesn't have.
Admin
That's what makes a reference a reference. It's the same when it comes to implementation on the (virtual or physical) machine, but in terms of the language it's something completely different. You can manipulate a pointer as you wish; you cannot manipulate a reference -> less powerful, more secure.
C# knows pointers as well, but clearly separates them from references (you cannot refer to a C# object using a C# pointer) and only allows them in a discouraged "unsafe" context.
TRWTF is C++ using both concepts at the same time, and interchangeably.
Admin
It is somewhat interesting that strings are considered unlimited, but numbers are limited to a certain number of bits; you need lots of different versions to deal with that, and there isn't even an unlimited version for when you need one.
It is also still a WTF in C++ that numeric types are not portable as standard (their sizes vary between platforms).
The lack of an immutable, reference-copied string class is a shortcoming in the C++ library. A lot of the time you want a "create in one place, then pass around a lot" string. It is fairly trivial to write one too, and ideally it should "copy by value" short strings (using a local buffer).
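A minimal sketch of such a class, using shared_ptr for the reference counting and leaving out the short-string local buffer for brevity:

[code]
#include <cstddef>
#include <memory>
#include <string>

// Immutable after construction, so copies can safely share one buffer:
// copying is a refcount bump, never a character-by-character copy.
class shared_str {
    std::shared_ptr<const std::string> s_;
public:
    explicit shared_str(std::string v)
        : s_(std::make_shared<const std::string>(std::move(v))) {}
    const char *c_str() const { return s_->c_str(); }
    std::size_t size() const { return s_->size(); }
};
[/code]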
Then you have all the issues with unsigned char, char and signed char, when in reality you probably want char to be unsigned most of the time, unless you want to handle "tiny" ints that might be negative; I don't recall ever having such a situation (where I couldn't use int instead). In that case you could then use the same data structure for binary blobs as for textual strings, i.e. they are just a collection of bytes.
Binary representation of structured data, i.e. marshalling and unmarshalling for sending across networks, storing in files etc. is badly supported by the C++ standard libraries. iostream even "fools" you by allowing you to set a "binary" mode. I remember when I was a naive beginner and after setting the stream to binary, didn't understand why my numbers were still printing as text....
I think I stuck with FILE* functions long into my C++-writing career, whereas others seem to write all their code in C except for iostream, something I never understand. Why use C++ only for its weakest feature?
Admin
TRWTF is a stupid developer using pointers. Seriously, there should be a C++ switch that would allow raw pointers only for experienced developers, because they know that they should not use them unless absolutely necessary, and even then they should be very, very, very careful.
Admin
That's brillant!
Admin
I guess there'd have to be an exception for header files included by your source, since any of these might have raw pointers in its declarations, and at worst might even include code that allocates or releases memory?
Admin
-- Umm yeah... because memory's really useful without ADDRESSES
Admin
You can have private virtual methods and they make sense. Many developers like a pattern of having public non-virtual methods calling private virtual ones. Sometimes protected virtual works better here so that one implementation can call its base-class implementation in addition to adding to it.
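A minimal sketch of that pattern (often called the non-virtual interface idiom; the Logger names are invented):

[code]
#include <iostream>
#include <string>

class Logger {
public:
    // Public, non-virtual entry point: the invariants (prefix, locking,
    // validation, ...) live here, in exactly one place.
    void log(const std::string &msg) {
        std::cout << "[log] ";
        write(msg);                 // dispatch to the customisation point
        std::cout << std::endl;
    }
private:
    virtual void write(const std::string &msg) = 0;  // private AND virtual
};

class ConsoleLogger : public Logger {
    void write(const std::string &msg) override { std::cout << msg; }
};

int main() {
    ConsoleLogger c;
    c.log("hello");   // prints "[log] hello"
    return 0;
}
[/code]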
If it were possible to extend the language, you could allow "interface" as well as "class" and "struct"; when a class is declared with "interface", all methods would be virtual by default, maybe even pure virtual. An interface would not have a user-defined constructor, would automatically have a virtual destructor, and would not be copyable (other than with a possible clone() function) or assignable.
There would be no need to create a compilation unit to house its v-table (i.e. its empty virtual destructor implementation) because all compilers would view it exactly the same way (there would be a standard v-table implementation).
This is not essential, though. We can manage without it; it doesn't add anything new to the language that you can't already do if you are skilled in it. Yes, virtual void whatever() = 0; looks messy, but it works, and there are more important issues to address in my opinion.
If I were writing a new language, or redesigning one, I would keep the concept of "headers" to show the interface of a class, and keep it separated from its implementation. I would, however, change the way it currently works in C++, where a header file is simply code that gets added to the source of the compilation unit. Instead, a header would be a fully-blown class and function definition file that would get compiled separately, and what is defined there would be added to some kind of "database". Your compilation units would then be compiled on top of that. Of course, it would get tricky when it comes to templates (or generics), meta-programming, type traits and partial specialization. Type traits would probably become a language feature, so that when a class/type is defined, you would also define what traits it has.
Admin
So how come, when people defend operator overloading, it's always "BUT MAAAAATRIX MATH!"? Because so many people need to do matrix math.
It's so much fun when I'm trying to display bytes as hex using %02X format and I get FFFFFFC9, etc. in the output. Sure, some compilers let you make char default to unsigned, but how you specify that is basically non-portable.
Because it holds your hand. (Except when you forget and leave the stream in hex or some strange formatting mode. Then it abandons you in the desert.) And printf format specifiers are haaaaaaaard. That's the one feature of C++ that I absolutely refuse to use. It wasn't needed, and it's just a stunt trick to justify operator overloading abuse.
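A minimal sketch of the FFFFFFC9 effect and the usual workaround:

[code]
#include <cstdio>

int main() {
    char c = (char)0xC9;        // plain char is signed on many platforms
    printf("%02X\n", c);        // sign-extended to int: typically FFFFFFC9
    printf("%02X\n", (unsigned char)c);   // converted first: C9
    return 0;
}
[/code]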
Admin
There is something I think people tend to forget: EVERY DETAIL IS IMPORTANT. Yes, of course you can do everything without operator overloading. You can also do everything in assembly. Do you program in assembly? Or on Turing machines? Those little conveniences are the whole reason to use a language, and the whole reason people derail threads every day to bash or defend some language. (Portability? No, assembly is portable too. You just have to write a program that translates it to another architecture's assembly.)
Of course, you could argue that the inconvenience of not knowing whether an operator is calling a function is worse than the convenience of operator overloading (arguably there could be ways to mitigate this), or that the inconvenience of having to implement it leaves you with fewer tools available. But "you don't technically need it" is never a good argument.
Admin
Typeless languages scare the heck out of me... then again so do languages that make type conversion too easy...
Admin
TRWTF is all you arrogant bastages think you know best. I've never seen such a bunch of knuckleheads so full of themselves.
Admin
Ah, a Mac guy. At first I thought, "troll," then I caught on from the use of, "candy colored." You know, like the cute little iMacs that no one uses any more.
Admin
Similarly, there are situations I'd consider using Java/C#/Ruby/LISP/Miranda/Prolog/<insert any language here> and situations I wouldn't.
Admin
It frightens me a little that there's so much language-bashing here. A bad tradesman always blames his tools. Sure, some languages are easier to use than others, and some are better suited to one task or another, but the question of which is better depends on the purpose and, to a degree, the user. Higher-level languages are often used to create business applications, because there's a perception that they are more idiot-proof and allow reasonable complexity to be programmed fairly quickly, even with inexperienced developers. Lower-level languages tend to have a reputation for tasks that are closer to the hardware (such as drivers and embedded systems, even with no file system) and are also used for tasks that need to flog the processor (scientific calculation might be one example).
Take a look around the world at off-the-shelf software you buy, and see if you can work out which languages are most common in the 'home user' environment.
Admin
I still see the occasional security advisory for some exploit, sometimes even a remote root exploit. And invariably, it has to do with improper handling of byte arrays.
So even though most system software is written in C (or C++), it wouldn't hurt (except for performance) if this became some managed version of C. Not all the way to the virtualisation that Java offers, but with some extra checks, particularly bounds checks.
Admin
In that case, I would assume that inheritance and void pointers scare you too. Every language has some way around static typing.
In languages with true dynamic typing, the one and only legitimate use of it is when you can get various types into a single variable and want to coerce them all into one type, without being too limited in what you'll initially accept. All other uses are laziness in my experience. The more common scenario, where one variable has one type for its entire lifetime, is extremely useful whether or not that type is declared or made implicit. Same benefits either way, which is why C# got the 'var' keyword.
Admin
No, in Java you cannot define a class header and put the implementation elsewhere. I cannot look at a header for a class to see what its methods are.
A good extension of C++ would be to allow this:
as a partial forward declaration. I don't think that's even allowed in C++11.
Another thing that should be allowed is:
or
You cannot forward-declare std::string yourself (e.g. namespace std { class string; }), because string is not actually a class; it is a typedef (alias) for basic_string<char, char_traits<char>, allocator<char>>.
Of course if we got rid of headers the way they are you would simply put in
and that would automatically allow you to use std::string in your project with all its methods. Headers would be used to create your own classes and functions.
Admin
[quote user="Cbuttius"]- iostream: rather horrific[/quote]I am still firmly convinced that the whole point of the iostream library was "Hey lookie here, I can overload the left shift and right shift operators to do I/O thingies!" (And now I have a vision of Cletus from The Simpsons saying that.) ...
[quote]That's the one feature of C++ that I absolutely refuse to use. It wasn't needed, and it's just a stunt trick to justify operator overloading abuse.[/quote]
ostringstream is the best option the library provides as a string builder.
The iomanip feature is all wrong... If I want to print a char in "%02x" format I want to do just that... not change the stream so that everything I output will be formatted that way from now on until I change it back.
You can't conveniently use the istream >> operator to read back what you wrote, in the same way that you wrote it, because it has no way of knowing where tokens end.
Output the numbers 1, 2, 3 in that order and read back and it will read 123 then not find any more numbers. Output a string "Hello world" and it will read back only "Hello" as it thinks the space denotes the end of the string.
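A minimal sketch of that round-trip failure:

[code]
#include <iostream>
#include <sstream>
#include <string>

int main() {
    std::stringstream ss;
    ss << 1 << 2 << 3;          // writes "123" -- the boundaries are gone
    int n = 0;
    ss >> n;
    std::cout << n << "\n";     // reads back 123, then the stream is empty

    std::stringstream ss2;
    ss2 << "Hello world";
    std::string word;
    ss2 >> word;
    std::cout << word << "\n";  // "Hello" -- whitespace ends the token
    return 0;
}
[/code]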
In any case, it is flawed that istream >> should be used to write into "objects". In reality you load into factories and then use those factories to create objects. Yes, I know the factories are also objects, but the point is that you write directly from the objects and read back into the factories.
I am not sure how other languages handle this any better.
Of course, there is more than one way to represent an object in persistent format, and you should be able to decouple these whilst ensuring your model is extensible in both directions (where the visitor pattern in its classic form fails, but works with an adapter pattern clipped on).
Admin
Ever heard of linked lists, passing a variable 'by reference', etc.?
Admin
CAPTCHA: usitas - Take the interface and usitas class header.