Admin
FWIW, the attributes look like the external annotations for ReSharper. Very useful in my experience.
http://www.jetbrains.com/resharper/webhelp/Code_Analysis__External_Annotations.html
Admin
Seems like most programmers featured here use the mantra of "Flail fast."
Admin
This is exactly what I'm struggling to understand... how is your example different from: [code] if (someObjectThatMayBeNull != null) someObjectThatMayBeNull.DoSomething(); else ... explicit handling for this case ... [/code] Before using it, we have to check that the value is valid one way or another. Although I can see the problems that not checking for null will cause in my example, I don't see how they're avoided in your example.
Maybe I'm missing something. Can you explain what would happen in your example if you didn't check HasValue but still called Value.DoSomething()? I'm assuming the answer is 'nothing', and I think this is worse than a null-pointer issue, because we think we have done something and continue. Null pointers can kill my program, but at least that makes it quite clear something is wrong. This non-nullable approach looks like it just ignores issues when something is wrong.
The obvious answer is that "good programming dictates that we check that the value is valid...", but this applies equally to NULL.
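The distinction the replies below are driving at can be sketched in Java, with Optional standing in for the non-nullable/HasValue idea (the describe method and its strings are invented for illustration, not from the thread):

```java
import java.util.Optional;

// A minimal sketch contrasting a plain null check with an Optional-based
// version. The point is not that the check disappears, but that the type
// signature advertises the absence case to both reader and compiler.
public class OptionalDemo {
    static String describe(Optional<String> maybeName) {
        // The caller is forced by the type to consider the empty case;
        // there is no way to "forget" the check and get a silent no-op.
        return maybeName.map(n -> "Hello, " + n)
                        .orElse("No name supplied");
    }

    public static void main(String[] args) {
        System.out.println(describe(Optional.of("Ada")));   // Hello, Ada
        System.out.println(describe(Optional.empty()));     // No name supplied
    }
}
```

The check doesn't disappear; it moves into the type, so forgetting it becomes a compile-time impossibility rather than a runtime surprise.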
Admin
The problem with Java and such is that anything can be a null value; you always have to pollute your code checking each and every argument and return value for nulls, instead of letting the damn type system do its work.
Maybe and Option offer a way of allowing optional return values without forcing the insanity of null checks everywhere.
Admin
The difference is that the type system informs you that a value might not be valid. With null, any reference can be invalid and the type system doesn't help with that. You end up with null checks everywhere. If you add non-nullable types to this, then you know which variables MUST have been initialized to a valid value and which may be invalid. This is more prevalent in functional languages because you usually do not reassign variables.
Admin
I don't believe you should check validity any less just because you don't use NULL.
Admin
What would you rather have them use?
Admin
Validity should also be ensured by the type system. Simply put, if a function (or method) has an argument of type X, it should return a valid response for every possible value of that argument.
Of course, this is often impossible in bad type systems like the one Java has, but it should be striven for, since it reduces runtime errors and improves the readability of preconditions.
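One way to read "a valid response for every possible value" is to narrow the argument type until the function is total. A hedged Java sketch (Month and daysIn are invented for illustration):

```java
// Sketch: narrowing an argument type so the function is total.
// Instead of accepting an arbitrary int (where 0, 13, or -5 would be
// invalid), accept an enum: every possible value of the argument type
// now has a valid answer, so no runtime precondition check is needed.
public class TotalFunction {
    enum Month { JAN, FEB, MAR, APR, MAY, JUN, JUL, AUG, SEP, OCT, NOV, DEC }

    // Total: defined for every Month. An int-based version would have to
    // validate its argument and fail at runtime for out-of-range values.
    static int daysIn(Month m) {
        switch (m) {
            case FEB: return 28;  // ignoring leap years for brevity
            case APR: case JUN: case SEP: case NOV: return 30;
            default:  return 31;
        }
    }

    public static void main(String[] args) {
        System.out.println(daysIn(Month.FEB)); // 28
    }
}
```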
Admin
Checking for NULL or checking for an Empty Variable of a particular type still requires a check to ensure a function returns a valid (or more importantly useful) value. Using an empty variable may ensure that a value is the correct type, but we still need to have some sort of check to make sure that the actual value is valid. To me it seems that by passing a dummy variable of a particular type merely gives us a false sense of confidence - the system may not break, but only because we silently miss the fact that this value is not actually valid. Perhaps I don't understand how these 'NULL Objects' work, but to me it sounds like they're an attempt to avoid an error in places where we want an error to be flagged.
Admin
I disagree with your argument on two points:
If you have
you can say that, in this case, conn is intended to be a SqlConnection. However, if we passed it to a method that operated on the supertype IDBConnection, then within the scope of that method you've lost all information about the original intended type of conn. If, however, we used , then we have all the type information we would have with a real, useful SqlConnection instance. And in some circumstances that's important. Now, this is a bad example, since the Null Object has a semantic of "invalid", whereas the empty string is clearly a valid string. But consider our SqlConnection from above. What if we had
? If the call fails, we have essentially three options: throw an exception, return null, or return a Null Object. If we return null, we've either lost all error information, or we have to obtain it in some way from the ConnectionFactory. And if we're in a multithreaded environment, the API for making sure you get YOUR error information, and not the information for the call that came in immediately after yours, can get complicated. On the other hand, if we return an instance of IDBConnection with an invalid ConnectionState, we can just put all the error information on it, and life is easier.
Admin
There's an unfortunate taxonomy at play here - the name Null Object carries the connotation of being "empty" or "useless" - effectively, a typed null value. But there's more to it than that. True, it's often helpful to carry type information along - but the real gain is being able to treat the concept of "invalid" or "empty" as a first-class instance of the object. Consider my points about a database connection in my reply above. Even though the connection may be invalid, there's potentially a lot you can do with an invalid value. Interrogate it for error conditions, etc. Or, in the case of strings, by creating the null string as an actual instance of String, we can compare it to other strings, concatenate it with other strings, dispatch methods on it, etc. without explicitly checking for null values. Plus, as soon as you obtain a null value, you've lost all the runtime type information modern languages put so much effort into attaching to objects. Null effectively breaks your type system, because null isn't type-safe.
In my mind, it comes down to this - not all "invalid" or "empty" states are equivalent, but by committing to a single value to represent invalid states for all types, you're giving them all the same semantics. You can make your code richer and more robust by explicitly codifying the semantics of "invalid," or "empty," or "null" for each of your types as appropriate, instead of using the null catch-all.
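A rough Java sketch of that idea, with all connection types invented for illustration (the thread's examples are .NET, so this is only an analogy, not the poster's actual code):

```java
// Sketch: an "invalid" connection that is still a first-class object you
// can interrogate, rather than a bare null that throws away everything.
public class NullObjectConnection {
    interface DbConnection {
        boolean isOpen();
        String errorMessage(); // empty when the connection is fine
    }

    static class OpenConnection implements DbConnection {
        public boolean isOpen() { return true; }
        public String errorMessage() { return ""; }
    }

    // The Null Object: safe to pass around and query, and it carries the
    // error information that a plain null would have lost.
    static class FailedConnection implements DbConnection {
        private final String error;
        FailedConnection(String error) { this.error = error; }
        public boolean isOpen() { return false; }
        public String errorMessage() { return error; }
    }

    static DbConnection connect(boolean simulateFailure) {
        return simulateFailure
            ? new FailedConnection("host unreachable")
            : new OpenConnection();
    }

    public static void main(String[] args) {
        DbConnection c = connect(true);
        // No null check needed; the failure state is part of the type's API.
        System.out.println(c.isOpen() + ": " + c.errorMessage()); // false: host unreachable
    }
}
```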
Admin
NULL
Additional lines provided to please Akismet
Admin
If you have a type difference between variables that can be null and those which can't, then you know when you need to check for null and when you don't, and the compiler can enforce this. Here is a discussion of how this works with Haskell's Maybe type. With Java (or C# reference types), it's hard for readers of the code to tell whether a variable has been checked for null or not, and frequently impossible for the compiler to know.
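A loose Java analogue of the Haskell Maybe idea (Optional plays the role of Maybe here; the parse example is invented):

```java
import java.util.Optional;

// A rough Java analogue of Haskell's Maybe: a value that might be absent
// has a different static type (Optional<Integer>) from one that is
// guaranteed present (int), so a reader can tell at a glance which
// variables still need checking.
public class MaybeAnalogue {
    static Optional<Integer> parse(String s) {
        try {
            return Optional.of(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return Optional.empty();   // "Nothing", in Haskell terms
        }
    }

    public static void main(String[] args) {
        Optional<Integer> maybe = parse("42");   // may be absent: the type says so
        int definitely = maybe.orElse(0);        // after this line, no doubt remains
        System.out.println(definitely);          // 42
    }
}
```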
Admin
How would interfaces work? Extrapolating from our fruit example, would it mean there is a Null Fruit Object even though an actual instance of Fruit is only possible in an implementing class (Orange or Apple)? Do we then still have Apple/Orange Null Objects too?
In your DB examples it would be possible to return a valid object (a Null Object, if you like) even on an unsuccessful connection. This doesn't mean that NULL is useless. Perhaps there is some use in a Null Object (I've even used dummy objects for different tasks, now I think about it), but I'm not sure that they are necessarily a replacement for NULL. It seems to me that removing NULL in favour of Null Objects is complicated for little benefit.
And (admittedly maybe I'm getting a little off track), what do we do in the case that memory cannot be allocated for an object? How can we represent that we have failed to allocate an Object, when we need an instance of the Object to be able to see this? Does each type of Object that might be used need to have a dummy stored in memory?
I agree that being able to distinguish between different types of errors can be useful (indeed, various mechanisms for this sort of thing appear to have been hacked into systems fairly early on), but they are not always useful anyway.
Maybe a return type of integer from any method, with some parameters passed by value is a good way to address error checking concerns.
Showing a potential use for a Null Object does not necessarily show that non-nullable variables have advantage over nullable ones - and I have no issue with an object being returned in the above examples (with a flag like connected = false), however I don't think that means that we should get rid of null altogether...
Admin
But then neither is as bad as coding if (x == false) followed by an empty set of brackets, then putting all actual code in the "else" clause.
I can make it through whatever life throws at me because I've actually found people who think that's a better solution.
Admin
Readers of code will know if a parameter has been checked for Null if you do it at the top of a method - sanity checking input (even with non-nullable types, I would think you check their validity as soon as possible). If you mean that you don't know whether they've been checked for NULL elsewhere, then I don't think we care. Assuming that checking will happen elsewhere is fraught with danger, because later someone might change things that happen elsewhere. This means a lot of checking (as other people have said), but I would think/hope that you still need a lot of checking for validity on Null Objects - which are essentially still just placeholders for our concept of null.
I don't really understand why the compiler needs to check whether you check for null, other than trying to force you to be sensible. Alas, this is still not a problem you get over with Null Objects, because although they can't be null, they still can be invalid (in the context of the program), so should the compiler then be ensuring that some sanity test is done on them? If so, how?
but meh... it's clear that no one here will be able to convince me that forcing non-nullability is a good thing, and it's clear that I won't be able to convince them otherwise, so I'm going home!! BYE!!
Admin
The NULL OBJECT pattern that I know and love doesn't work that way; it's a full object that does what you need it to do, but it has values set to defaults so there won't be null exceptions. For instance, if you were writing a Payroll application and an Employee wasn't found, instead of getting a NullReference (or RecordNotFound or similar) exception, you'd get a "NullEmployee" which inherits from Employee, but has name, title, etc. set to empty string, and calling a method that calculates the pay would be hard-coded to return a 0, because a NullEmployee has no pay. Similarly asking if it's payday for a NullEmployee would always return false, because it's never time to pay a nonexistent employee.
That's the NULL OBJECT pattern that I'm familiar with. Your calling code never has to know that it's dealing with a null entity, just any operation is basically stubbed out.
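That description maps almost line-for-line onto code. A minimal Java sketch (Employee, NullEmployee, and findEmployee are invented names fleshing out the comment's example):

```java
// Sketch of the Null Object pattern exactly as described: a NullEmployee
// that stubs out every operation with safe defaults, so calling code
// never needs to know it's dealing with a missing record.
public class PayrollDemo {
    static class Employee {
        private final String name;
        Employee(String name) { this.name = name; }
        String getName() { return name; }
        double calculatePay() { return 1000.0; }
        boolean isPayday() { return true; }
    }

    static class NullEmployee extends Employee {
        NullEmployee() { super(""); }                      // name defaults to empty
        @Override double calculatePay() { return 0.0; }    // no pay for a nonexistent employee
        @Override boolean isPayday() { return false; }     // never payday
    }

    static Employee findEmployee(String name) {
        // Lookup failed: return the Null Object instead of null.
        return new NullEmployee();
    }

    public static void main(String[] args) {
        Employee e = findEmployee("nobody");
        // Calling code proceeds without a null check; every operation is stubbed.
        System.out.println("'" + e.getName() + "' " + e.calculatePay() + " " + e.isPayday());
    }
}
```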
Admin
The only WTF here is that the submitter thought this was a WTF. (and that he said sense instead of since, and that Alex thought this was a WTF... that worries me)
Rockstar:2, Greg:0, TDWTF:-2
Admin
You can argue the merits of NULL until you're blue in the face. It doesn't matter because most languages have a concept of NULL. So what's more important is understanding exactly what NULL is and how you work with it.
Someone said NULL indicates uninitialised variables. It varies by language but at a conceptual level that notion is wrong. Consider that, underneath all of the layers of abstraction we've built, everything is just memory. When you define a variable in C, if you don't initialise it then you get whatever is the memory block for that variable. It could be all zeros or (more likely) it's garbage. Setting your variable to NULL explicitly zeros primitive types and pointers.
In an object oriented language like C#, the concept of NULL still applies. The fact that the language and runtime "protect" you from memory doesn't mean NULL is any less significant. Nullable<T> allows developers to define a contract for when NULL values are acceptable and when they're not. I think there's value there, especially when writing a library. But Nullable<T>.HasValue is just syntactic sugar.
Generally, you should always validate your inputs. If you write a function that expects an input to be non-NULL, then you should check for it and return an error code or throw an exception. However, checking for NULL is only one type of validation. If your function expects a string to be 10 characters, then you should check for that too.
There's nothing inherently wrong with the Guard class, but it stinks of ignorance and bad design. You should be checking inputs when it matters, not flailing because you think it's "safe" or "better".
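For what a Guard-style helper might look like, here's a hedged sketch in Java (the original class is .NET and its API is not shown in the thread, so these method names are assumptions):

```java
// A minimal Guard-style helper: centralize precondition checks so every
// public method validates its inputs the same way. As the comment notes,
// the null check is only one kind of validation among several.
public class GuardDemo {
    static final class Guard {
        static void notNull(Object arg, String name) {
            if (arg == null)
                throw new IllegalArgumentException(name + " must not be null");
        }
        static void hasLength(String arg, int len, String name) {
            notNull(arg, name);
            if (arg.length() != len)
                throw new IllegalArgumentException(name + " must be " + len + " chars");
        }
    }

    static String normalize(String code) {
        Guard.hasLength(code, 10, "code");  // null check plus a length check
        return code.toUpperCase();
    }

    public static void main(String[] args) {
        System.out.println(normalize("abcde12345")); // ABCDE12345
    }
}
```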
Admin
Zeroes primitive types? What? Did you even try that before you said it? You get at best a compile-time warning, and at worst a compile-time error. Using NULL as a constant zero is stupid, because it has the type void *, which is a pointer. Generally speaking, you can coerce pointers into integers, which is why int i = NULL; will work (though the compiler will complain that you're converting from a pointer to an integer implicitly). Go ahead and try float i = NULL; here's what GCC has to say about that: error: incompatible types when initializing type ‘float’ using type ‘void *’
Leave C to people who actually know it.
Admin
Candles, oh yeah, that's things with flames at the top, nice one, we can use them to burn all those stupid books.
Or to put it more succinctly: YHBT, YHL. FOAD.
Admin
Greg Mitchell, the real WTF.
Admin
Alright, another concrete example.
Suppose you have a form in which you may type numbers. You may wish to do arithmetic on these numbers (e.g. make the bottom row a total row, the RHS column another set of totals, make the 3rd column the difference between the 1st and 2nd column, whatever).
It can be convenient under these circumstances to model a cell which has not had a value entered into it as a "null cell". It is then convenient and productive to take into account the behaviour of "null cells" when implementing the code to process the arithmetic. This is far easier to maintain than implementing an unentered cell as an actual linguistic "null".
No doubt someone is smugly going to announce that I'm reinventing Excel with this hypothetical situation, but I am just going to ignore them.
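The "null cell" idea can be sketched as follows (Java, with invented Cell types; not an attempt to reinvent Excel):

```java
// Sketch of the "null cell": an unentered cell is a real object with
// well-defined arithmetic behaviour (it contributes zero to sums), so
// the totalling code needs no special cases and no null checks.
public class NullCellDemo {
    interface Cell { double value(); }

    static class NumberCell implements Cell {
        private final double v;
        NumberCell(double v) { this.v = v; }
        public double value() { return v; }
    }

    // The null cell: behaves as zero in arithmetic, by design.
    static class NullCell implements Cell {
        public double value() { return 0.0; }
    }

    static double total(Cell[] row) {
        double sum = 0.0;
        for (Cell c : row) sum += c.value();  // no null checks needed
        return sum;
    }

    public static void main(String[] args) {
        Cell[] row = { new NumberCell(3), new NullCell(), new NumberCell(4) };
        System.out.println(total(row)); // 7.0
    }
}
```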
Admin
You are completely and absolutely wrong. First of all, note the fact that the code is located within a library. A library can be reused by anyone by just adding a reference. You cannot enforce anything on an external caller. The code can be called directly or via reflection, and the external library can be compiled for MS.NET or Mono...
Your library is responsible for validating the input from its caller, and for giving out reasonable error messages.
Just imagine any file operation, such as reading a whole file into memory. What would you prefer in case the file name is not valid - a generic Win32Exception or a FileNotFoundException?
Your point 1b also shows that you do not understand the ArgumentException - it is there to inform the caller that he is using the method in an invalid way. You will not catch that exception; instead you will correct your code.
And your point #2 is also invalid - the presence of an ArgumentNullException shows that the reference is needed by the method. Even if in one case the branching of the code might not require the reference, another might. And that branching might not depend on the method arguments but rather on some external state.
And now the final example:
The code above throws a NullReferenceException when it calls c.Serialize. But the previous two calls have completed: the stream has had a bunch of bytes written. When the exception interrupts the code, the stream is closed but the file contents are incomplete. This is the very purpose of ArgumentExceptions - if the method knows it requires the argument to be of a certain kind, it has to validate it upfront, so that it does not need a lot of clean-up code to recover from the NullReferenceException (in this case, deletion of the incomplete file).
A similar example in .NET (though perhaps it gets the message across more clearly) is the MemoryFailPoint class. This class is used to ensure that your huge operation will have enough memory to complete and not throw an OutOfMemoryException in the middle.
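The half-written-stream failure mode described above can be demonstrated with a small Java sketch (writeRecord and its arguments are invented; the thread's original example is .NET serialization):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Sketch of the failure mode: if a null argument is only discovered
// mid-write, the stream already contains partial output. Validating up
// front keeps the output all-or-nothing, with no clean-up code needed.
public class UpfrontValidation {
    static void writeRecord(OutputStream out, String header, String body)
            throws IOException {
        // Validate BEFORE writing anything, so a bad argument cannot
        // leave a half-written stream behind.
        if (header == null) throw new IllegalArgumentException("header is null");
        if (body == null)   throw new IllegalArgumentException("body is null");
        out.write(header.getBytes());
        out.write(body.getBytes());
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try {
            writeRecord(buf, "HDR:", null);   // rejected before any write
        } catch (IllegalArgumentException e) {
            System.out.println("rejected, bytes written: " + buf.size()); // 0 bytes
        }
    }
}
```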
Admin
I don't see a problem here. Making use of a helper class to perform argument checking seems like a sound idea to me. It's the closest .NET devs could get to coding to contract before Code Contracts came in.
Stephen
Admin
The submission is a WTF.
Admin
I refuse to read thedailywtf again until this wtf submission is classified as wtf.
Admin
THERE'S SOMETHING WRONG WITH THE CODE.
Good code doesn't let them occur.
Admin
And when it does, would you rather catch it earlier and know exactly which argument was incorrectly null, or wait for it to happen at random times possibly many stack frames later?
Admin
Sir, that's plain disgusting .. hardcoding fail at the heart of your application, making the cpu run around not-calculating fail values is pure nonsense ... the friggin employee does not exist so why even start calculating whatever ???
hardcoding is bad, especially if you have to put some everywhere in your application just in case .. and besides your nullemployee or a null value check return are exactly the same thing, except the nullemployee is by design WRONG.
It's exactly like '<emptystring>' instead of '' ... pointless, useless and way more dirty.
Being afraid of null exceptions to the point where you create buggy hardcoded crap is a WTF.
Admin
However, I'm not particularly impressed by the examples you give. For one thing, working on a null object is a bug, plain and simple. It's a situation that shouldn't arise in the first place. Now we all know that these situations arise more often than you can shake a stick at, and having the notion of Null objects makes sense.
Not knowing what the difference is between SqlConnection and IDBConnection, I don't really see the point. The method works on one or the other type; if it expects an IDBConnection, the additional functionality of SqlConnection is unknown to it.
Also, the argument 'you have to check for null everywhere' doesn't hold. If you have to check for null, you haven't designed your method very well. This isn't C, you know, where you had to check for NULL values. In Java, if you request a database connection and something goes wrong, you don't get returned a null object: instead, a checked exception is thrown. The programmer MUST handle this exception. So the whole concept of a null reference is moot here. The SQLException will (hopefully!) contain enough information about why this call failed, and there's no such concept of 'having to query the ConnectionFactory' or it being difficult in a multi-threaded environment. Bad example.
Null simply means 'nothing'. As mentioned, the problem is that it's a straight translation of a pointer (which is why it's a NullPointerException), which lacks a type. Databases have null fields, but they're typed (which is why you have to write such awkward code in JDBC when setting a null value). So that would be an argument for a Null object.
However, nobody is stopping you from writing a Null object in Java. Suppose you have this code:
MyObject myObject = new MyObject();
Hey presto, we have our null object. It's a somewhat of a kludge, because there's no way to enforce that you can't write this:
MyObject myObject = null;
but on the other hand, do you need to enforce everything on the developer? Having null is convenient at times, just like having primitives is convenient at times. Arguments have been put forward that both should be removed from Java, just like arrays, but there's a difference between having a language that is beautifully constructed and truly object-oriented, and having a language where certain trade-offs have been made so that you can handle byte[] instead of List<Byte>.
If we have to be held by the hand for everything we develop, we might as well all switch to Logo.
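The kludge described above, sketched out (MyObject and its NULL constant are the comment's invented names, fleshed out here; as the comment says, nothing stops a caller writing `= null` anyway):

```java
// Sketch of the "null object by convention" kludge in plain Java: a
// shared, well-known "empty" instance. It cannot be enforced by the
// compiler - which is exactly the limitation the comment points out.
public class NullInstanceKludge {
    static class MyObject {
        // A shared "null instance" by convention.
        static final MyObject NULL = new MyObject("");
        private final String name;
        MyObject(String name) { this.name = name; }
        boolean isNull() { return this == NULL; }
        String getName() { return name; }
    }

    public static void main(String[] args) {
        MyObject m = MyObject.NULL;
        // Safe to call methods on; no NullPointerException possible here.
        System.out.println(m.isNull() + " '" + m.getName() + "'");
    }
}
```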
Admin
Or to put it another way, you never know a variable is non-null, and you have to manually null-check in every single method you write. Which is stupid and tedious.
You missed the part where the compiler knows about it. It won't let you pass a null or a nullable object to a method that expects a non-null object. Say you have method a that calls method b that calls method c, and method a takes a nullable parameter. Then you can null-check in method a, and pass a non-null object to method b, and methods b and c don't need to do any null-checking because the object they're passed is guaranteed not to be null. And there's no risk of someone removing the null-check from method a, because the compiler won't let them. You don't use a Null Object for an invalid object - you use a Maybe Valid object, if that's what you have. And of course you have to check such a thing before you use it. But a lot of the time you know your object is definitely valid - and by having a type that represents a definitely valid object, you can avoid checking what you don't need to and make sure you check what you do need to.
Admin
Dathan, even if you have no Orange, you can still call Orange's static methods.
Admin
Don't get me wrong: there's nothing wrong with having the compiler check certain things. I'm not complaining about getting ArrayIndexOutOfBoundsException either. And having a runtime check for non-null values makes sense, too.
But I refuse to check parameters passed to my API methods to make sure that the other developer isn't a trained chimp. If I write in the JavaDocs that you must pass a non-null instance of ClassX, you must pass a non-null instance of ClassX. You must not pass an instance of ClassY, except if it's derived from ClassX, and you must most definitely not pass null. If you do pass null, I can guarantee you that you get a NullPointerException at some point in the code.
(Hmm... the previous paragraph reads rather a lot like the Book of Armaments, chapter 2, verses 9-21.)
Admin
I disagree. It is prudent to guard against the users being silly. It adds value to tell them in what way they are being silly. If nothing else, it removes the cause of some of the interruptions that go along the lines of: "I called your stupid useless method on your stinking class and it threw a damned NullPointerException." Using defensive programming techniques may irritate the purists who claim that they should not need to trap out values which are non-compliant with the terms of the API, but it sure does make your class easier to use.
Which would you rather do - spend some extra time ensuring that the arguments coming in adhere to appropriate values, throwing e.g. an IllegalArgumentException complete with an explanatory error message detailing exactly what the client software has done incorrectly, or spend no small amount of time on the telephone smugly explaining in a superior voice that your program is fine and it's the fault of the stupid user? I confess to being in the first category, as I have absolutely no time nor patience for those who prefer to place themselves in the second. Such people can find another place of employment, as they won't last long as employees or paid consultants of mine.
Admin
Can't we just accept that it depends on the situation?
If you're writing a commercially available library thats going to be used by other people (who are possibly not as clever as you), and you want it to be any good, then you ought to use ArgumentNullExceptions properly.
If you're writing something small and fast, which is only ever going to be consumed internally by yourself or your company, then its fine to take shortcuts; the only people who will ever use it know how to use it, and anyone who doesn't know how can sod off.
Admin
Which would you rather do? Code a whole lot of lines to manage the inherent fail present in most programmers, or code a few lines of faster, more efficient, more beautiful code which can indeed be used only by those who give a f**k?
In the end it's all a matter of how much you care about the use of your API in low-quality applications, i.e. the use of your API by more clients.
"easier to use" is not the correct term, it's not easier, it's much worse in terms of code quality /efficiency, BUT it enables really bad programmers to use your API, which is ONLY good from a business point of view (which may be all that matters in quite many cases).
In other words, this whole discussion is pointless, there is no sense in comparing "coding for quality" and "coding for business", the purposes are different and so are the best practices.
Admin
Null does not mean 0
Got that? You're mixing up null and zero, two very different concepts. Let me write it as code:
That's what you get when you start comparing apples and oranges, or in this case, sets of apples and sets of oranges. You can't, at least not in a type-safe language.
Admin
Am I getting this right - checking for a null reference and throwing an ArgumentNullException is considered a WTF, or am I missing something else?
If so, here's my question: When the f*ck else are you supposed to throw System.ArgumentNullException? It exists for a reason?
Admin
Why is he called a RockStar? Does it have something to do with his Big Stones?
Admin
It's like putting ABS on a car. The result is that drivers take more risk, because "the car brakes better". The net result is a far lower decrease in accidents than you might have expected. Defensive driving versus defensive engineering.
And it's not easier to use. It's perhaps a bit easier to track bugs in your own code, but that's not the task of an API. The task of an API is to do whatever it's been designed for, and I never design APIs to train chimps.
I think you could have done without adjectives like 'smugly' (OK, that's an adverb) and 'superior'. But you also need to read carefully: I didn't say I never check arguments for validity; I said I don't check arguments for the possibility that the developer using my classes is a trained chimp. If it makes sense to test a parameter, I will. I may even throw an IllegalArgumentException if you need to pass an enum, which is checked inside a switch() block, and it might be null. But that is purely to handle the default case, which one should always do.
However, I very much prefer the sort of design where mistakes are avoided in the first place, instead of having checks everywhere because somebody else might be a lazy sod, or otherwise unfit to be called a developer.
Where this is not feasible, though, I'm not going jump through hoops to do somebody else's job.
Admin
Perhaps a better idea would be to use the assert keyword. Like this:
(Some people might prefer 'assert ( param == null ) == false;' for 'readability', if the preceding conversation is to be believed. PMD doesn't like that either.)
The thing about assertions is that you can disable them at runtime. Since the check is meant to catch the possibility that a trained chimp is using your code, and thus all this should be caught before the application is launched, it looks like the best solution. Bear in mind that NullPointerException precedes assertions in Java.
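A minimal sketch of the assert-based precondition (the length method is invented for illustration):

```java
// Sketch of an assert-based precondition. Assertions are disabled by
// default and enabled with `java -ea`, so they catch misuse during
// development without costing anything in production.
public class AssertDemo {
    static int length(String param) {
        assert param != null : "param must not be null";
        return param.length();
    }

    public static void main(String[] args) {
        System.out.println(length("hello")); // 5
    }
}
```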
Admin
And another thing: if you were to persist in calling your less snobbishly intelligent colleagues "chimps", you'd be out on your ear so fast you'd break the sound barrier.
Admin
I'm not saying the particular approach is redeemable, but there is something to be said for the principle of checking your service method inputs right away, before doing any additional work, and throwing a form of IllegalArgumentException or InvalidParameterException describing the invalid parameter, which pinpoints the problem. NullPointerExceptions are typically too vague.
In frameworks like spring it can be as simple as a misconfigured bean in development, or an unexpected state at runtime. All too often the natural NullPointerException is thrown on a method where any of several parameters could be null or invalid, or the dao or service is not configured correctly.
I've worked on projects without this approach and projects with it. Those that do make maintenance much easier, even after being away from the code for a while.
Admin
There are days I hate the way my brain works... I read all that discussion of NULL oranges and thought "would a clockwork orange be null or not"? :)
Admin
I don't see any WTF. Guard classes are very common; even Microsoft has introduced Code Contracts with .NET 4.0. Can anybody explain what's wrong with this code?