• James Manning (unregistered)

    FWIW, the attributes look like the external annotations for ReSharper. Very useful in my experience.

    http://www.jetbrains.com/resharper/webhelp/Code_Analysis__External_Annotations.html

  • some dude (unregistered) in reply to Marcel Popescu
    Marcel Popescu:
    Yeah... that sound you heard? It's today's WTF falling flat. "Fail fast" is a principle that more programmers should learn and use.

    Seems like most programmers featured here use the mantra of "Flail fast."

  • hmmmm (unregistered) in reply to mthamil
    mthamil:
    hmmmm:
    But wouldn't Null Objects just hide the problem, rather than fixing it?

    That's why you use Option/Maybe/Nullable. The problem is not hidden, it's null that hides the problem. Removing null and replacing it with these objects makes it explicit that a value may not exist, forcing a developer to handle this case.

    Instead of:

    someObjectThatMayBeNull.DoSomething()

    You have:

        if (someObjectThatMayBeNull.HasValue)
            someObjectThatMayBeNull.Value.DoSomething()
        else
            ... explicit handling for this case ...
    

    This is exactly what I'm struggling to understand... how is your example different to:

        if (someObjectThatMayBeNull != null)
            someObjectThatMayBeNull.DoSomething();
        else
            ... explicit handling for this case ...

    Before using it, we have to check that the value is valid one way or another. Although I can see the problems that not checking null will cause in my example, I don't see how they're avoided in your example.

    Maybe I'm missing something. Can you explain what would happen in your example if you didn't check HasValue but still called Value.DoSomething()? I'm assuming the answer is 'nothing' and I think this is worse than a null-pointer issue - because we think we have done something and continue. Null pointers can kill my program, but at least that makes it quite clear something is wrong. This non-nullable approach looks like it just ignores issues when something is wrong.

    The obvious answer is that "good programming dictates that we check that the value is valid...", but this applies equally to NULL.

  • AP2 (unregistered)

    The problem with Java and such is that anything can be a null value; you always have to pollute your code checking each and every argument and return value for nulls, instead of letting the damn type system do its work.

    Maybe and Option offer a way of allowing optional return values without forcing the insanity of null checks everywhere.
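
    Roughly, in C# terms - a hand-rolled sketch, since Option isn't a built-in .NET type, and repository.FindCustomer is invented for the example:

        // The compiler forces callers to handle BOTH cases before they can touch the value.
        abstract class Option<T>
        {
            public abstract TResult Match<TResult>(Func<T, TResult> ifSome, Func<TResult> ifNone);
        }

        sealed class Some<T> : Option<T>
        {
            private readonly T value;
            public Some(T value) { this.value = value; }
            public override TResult Match<TResult>(Func<T, TResult> ifSome, Func<TResult> ifNone)
            {
                return ifSome(value);
            }
        }

        sealed class None<T> : Option<T>
        {
            public override TResult Match<TResult>(Func<T, TResult> ifSome, Func<TResult> ifNone)
            {
                return ifNone();
            }
        }

        // A lookup that may find nothing returns Option<Customer>, never null:
        Option<Customer> result = repository.FindCustomer(id);
        string name = result.Match(c => c.Name, () => "<no such customer>");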

  • (cs) in reply to hmmmm
    Maybe I'm missing something. Can you explain what would happen in your example if you didn't check HasValue but still called Value.DoSomething()? I'm assuming the answer is 'nothing' and I think this is worse than a null-pointer issue - because we think we have done something and continue. Null pointers can kill my program, but at least that makes it quite clear something is wrong. This non-nullable approach looks like it just ignores issues when something is wrong.

    The obvious answer is that "good programming dictates that we check that the value is valid...", but this applies equally to NULL.

    The difference is that the type system informs you that a value might not be valid. With null, any reference can be invalid and the type system doesn't help with that. You end up with null checks everywhere. If you add non-nullable types to this, then you know which variables MUST have been initialized to a valid value and which may be invalid. This is more prevalent in functional languages because you usually do not reassign variables.

  • Slicerwizard (unregistered) in reply to J
    J:
    The submitter writing 'sense' instead of 'since' bothers me more than this code.
    It's highly likely that Alex contributed that. He never fails to deliver the goods.
  • hmmmm (unregistered) in reply to AP2
    AP2:
    The problem with Java and such is that anything can be a null value; you always have to pollute your code checking each and every argument and return value for nulls, instead of letting the damn type system do its work.

    Maybe and Option offer a way of allowing optional return values without forcing the insanity of null checks everywhere.

    But surely you still have to check validity. What's the difference between checking nullity everywhere and checking validity everywhere?
    I don't believe you should check validity any less just because you don't use NULL.

  • Gunslinger (unregistered) in reply to Super-Anonymoused
    Super-Anonymoused:
    Unfortunately, we do have some of that (booleanCondition == false) shit going on. Because it's "clearer" apparently. At least it's not (booleanCondition != true).

    What would you rather have them use?

  • AP2 (unregistered) in reply to hmmmm
    hmmmm:
    AP2:
    The problem with Java and such is that anything can be a null value; you always have to pollute your code checking each and every argument and return value for nulls, instead of letting the damn type system do its work.

    Maybe and Option offer a way of allowing optional return values without forcing the insanity of null checks everywhere.

    But surely you still have to check validity. What's the difference between checking nullity everywhere and checking validity everywhere?
    I don't believe you should check validity any less just because you don't use NULL.

    Validity should also be ensured by the type system. Simply put, if a function (or method) has an argument of type X, it should return a valid response for every possible value for that argument.

    Of course, this is often impossible in bad type systems like the one Java has, but it should be striven for, since it reduces runtime errors and improves the readability of preconditions.

  • hmmmm (unregistered) in reply to AP2
    AP2:
    hmmmm:
    AP2:
    The problem with Java and such is that anything can be a null value; you always have to pollute your code checking each and every argument and return value for nulls, instead of letting the damn type system do its work.

    Maybe and Option offer a way of allowing optional return values without forcing the insanity of null checks everywhere.

    But surely you still have to check validity. What's the difference between checking nullity everywhere and checking validity everywhere?
    I don't believe you should check validity any less just because you don't use NULL.

    Validity should also be ensured by the type system. Simply put, if a function (or method) has an argument of type X, it should return a valid response for every possible value for that argument.

    Of course, this is often impossible in bad type systems like the one Java has, but it should be striven for, since it reduces runtime errors and improves the readability of preconditions.

    Maybe I'm thick, or maybe it's too early in the week, but I still don't get it....

    Checking for NULL or checking for an Empty Variable of a particular type still requires a check to ensure a function returns a valid (or more importantly useful) value. Using an empty variable may ensure that a value is the correct type, but we still need to have some sort of check to make sure that the actual value is valid. To me it seems that passing a dummy variable of a particular type merely gives us a false sense of confidence - the system may not break, but only because we silently miss the fact that this value is not actually valid. Perhaps I don't understand how these 'NULL Objects' work, but to me it sounds like they're an attempt to avoid an error in places where we want an error to be flagged.

  • Dathan (unregistered) in reply to hmmmm
    hmmmm:
    But wouldn't Null Objects just hide the problem, rather than fixing it? A seemingly valid object is passed that doesn't behave as expected because it isn't really a real object. I don't understand how this has any advantage over NULL. If we don't check for NULL, and try to execute a method on it, we have issues. Instead we have an object that (for all intents and purposes) looks like an object we can handle, whose methods we can call, but whose responses are useless and return some default values which might or might not be useful. Essentially we are over-engineering the idea of NULL to create a concept that merely masks some of the issues with NULL - issues which will probably manifest in other ways.

    Think about the real world. If I have 2 oranges and give them both away, I suddenly have nothing. I don't have a special representation of an orange (perhaps just the peel) that can remind me I have nothing, but which I can still try to manipulate like an orange. I have nothing. This is the same nothing I have when I give away my only 3 apples. That's right, despite these objects being very different, I still end up with the same nothing when I don't have any of them.

    NOTE: This argument is directly against a notion of a Null Object - I won't pretend to understand the Option monad, and perhaps it does somehow offer a reasonable alternative.

    I disagree with your argument on two points:

    1. The semantics of null in programming are only loosely related to the concept of "Nothing" in the real world. Your example of oranges, I would argue, is closer to a Null Object situation than a true null. If you have an orange, and then you give it away, you still have a concept of the orange. Sort of a notional orange. True, you can't perform operations that require manipulating the true orange - e.g., you can't peel and eat it, or compare its color to that of another orange. But you can still perform certain actions on the notion of that orange. For instance, you can definitively say that your orange wasn't an apple. That's something you CANNOT do with a null value.

    If you have

    SqlConnection conn = null
    you can say that, in this case, conn is intended to be a SqlConnection. However, if we passed it to a method whose parameter was typed as the supertype IDBConnection, then within the scope of that method you've lost all information about the original intended type of conn. If, however, we used
    SqlConnection conn = new NullSqlConnection()
    , then we have all the type information we would have with a real, useful SqlConnection instance. And in some circumstances that's important.

    2. You can't really do anything with null. You can't invoke methods on it, or compare it to another instance in any meaningful way. And you're assuming that those same limitations apply to Null Objects - but they don't. Consider, for instance, the empty string. Semantically, it's very different from the null string, right? You can compare it to other strings, concatenate other strings onto it, etc.

    Now, this is a bad example, since the Null Object has a semantic of "invalid", whereas the empty string is clearly a valid string. But consider our SqlConnection from above. What if we had

    IDBConnection connection = ConnectionFactory.CreateConnection()
    ? If the call fails, we have essentially three options: throw an exception, return null, or return a Null Object. If we return null, we've either lost all error information, or we have to obtain it in some way from the ConnectionFactory. And if we're in a multithreaded environment, the API for making sure you get YOUR error information, and not the information for the call that came in immediately after yours, can get complicated. On the other hand, if we return an instance of IDBConnection with an invalid ConnectionState, we can just put all the error information on it, and life is easier.
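
    To make that concrete, here's a sketch - every type below is invented for the example, and a real connection interface has far more members:

        interface IConnection
        {
            bool IsOpen { get; }
            string FailureReason { get; }   // only meaningful on the null object
        }

        class NullConnection : IConnection
        {
            private readonly string reason;
            public NullConnection(string reason) { this.reason = reason; }
            public bool IsOpen { get { return false; } }
            public string FailureReason { get { return reason; } }
        }

        static IConnection CreateConnection(string connectionString)
        {
            try { return OpenRealConnection(connectionString); }         // hypothetical helper
            catch (Exception e) { return new NullConnection(e.Message); }
        }

    Each caller gets its error information back on its own instance, however many threads are hitting the factory at once.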

  • Dathan (unregistered) in reply to Jimmy
    Jimmy:
    Can you explain (I'm not being stubbornly argumentative or facetious, I really don't understand)? How is it an advantage to have an empty non-null? and how is it implemented so we know it is empty? Surely we still have to do some sort of test, which (to my mind) still leaves us in much the same boat.

    Happy to learn otherwise, but I'm not seeing the use in non-nullable types. To me it seems like people are just proposing to have a type-specific null concept (call it an empty object or whatever you will). So we save on null-pointer issues by not having a null, but we still have issues when we assume something has a value when it doesn't really.

    There's an unfortunate taxonomy at play here - the name Null Object carries the connotation of being "empty" or "useless" - effectively, a typed null value. But there's more to it than that. True, it's often helpful to carry type information along - but the real gain is being able to treat the concept of "invalid" or "empty" as a first-class instance of the object. Consider my points about a database connection in my reply above. Even though the connection may be invalid, there's potentially a lot you can do with an invalid value. Interrogate it for error conditions, etc. Or, in the case of strings, by creating the null string as an actual instance of String, we can compare it to other strings, concatenate it with other strings, dispatch methods on it, etc. without explicitly checking for null values. Plus, as soon as you obtain a null value, you've lost all the runtime type information modern languages put so much effort into attaching to objects. Null effectively breaks your type system, because null isn't type-safe.

    In my mind, it comes down to this - not all "invalid" or "empty" states are equivalent, but by committing to a single value to represent invalid states for all types, you're giving them all the same semantics. You can make your code richer and more robust by explicitly codifying the semantics of "invalid," or "empty," or "null" for each of your types as appropriate, instead of using the null catch-all.

  • Herby (unregistered)

    NULL

    Additional lines provided to please Akismet

  • voyou (unregistered) in reply to hmmmm
    But surely you still have to check validity. What's the difference between checking nullity everywhere and checking validity everywhere?

    If you have a type difference between variables that can be null and those which can't, then you know when you need to check for null and when you don't, and the compiler can enforce this. Here is a discussion of how this works with Haskell's Maybe type. With Java (or C# reference types), it's hard for readers of the code to tell whether a variable has been checked for null or not, and frequently impossible for the compiler to know.

  • hmmmm (unregistered) in reply to Dathan
    Dathan:
    hmmmm:
    But wouldn't Null Objects just hide the problem, rather than fixing it? A seemingly valid object is passed that doesn't behave as expected because it isn't really a real object. I don't understand how this has any advantage over NULL. If we don't check for NULL, and try to execute a method on it, we have issues. Instead we have an object that (for all intents and purposes) looks like an object we can handle, whose methods we can call, but whose responses are useless and return some default values which might or might not be useful. Essentially we are over-engineering the idea of NULL to create a concept that merely masks some of the issues with NULL - issues which will probably manifest in other ways.

    Think about the real world. If I have 2 oranges and give them both away, I suddenly have nothing. I don't have a special representation of an orange (perhaps just the peel) that can remind me I have nothing, but which I can still try to manipulate like an orange. I have nothing. This is the same nothing I have when I give away my only 3 apples. That's right, despite these objects being very different, I still end up with the same nothing when I don't have any of them.

    NOTE: This argument is directly against a notion of a Null Object - I won't pretend to understand the Option monad, and perhaps it does somehow offer a reasonable alternative.

    I disagree with your argument on two points:

    1. The semantics of null in programming are only loosely related to the concept of "Nothing" in the real world. Your example of oranges, I would argue, is closer to a Null Object situation than a true null. If you have an orange, and then you give it away, you still have a concept of the orange. Sort of a notional orange. True, you can't perform operations that require manipulating the true orange - e.g., you can't peel and eat it, or compare its color to that of another orange. But you can still perform certain actions on the notion of that orange. For instance, you can definitively say that your orange wasn't an apple. That's something you CANNOT do with a null value.

    If you have

    SqlConnection conn = null
    you can say that, in this case, conn is intended to be a SqlConnection. However, if we passed it to a method whose parameter was typed as the supertype IDBConnection, then within the scope of that method you've lost all information about the original intended type of conn. If, however, we used
    SqlConnection conn = new NullSqlConnection()
    , then we have all the type information we would have with a real, useful SqlConnection instance. And in some circumstances that's important.

    2. You can't really do anything with null. You can't invoke methods on it, or compare it to another instance in any meaningful way. And you're assuming that those same limitations apply to Null Objects - but they don't. Consider, for instance, the empty string. Semantically, it's very different from the null string, right? You can compare it to other strings, concatenate other strings onto it, etc.

    Now, this is a bad example, since the Null Object has a semantic of "invalid", whereas the empty string is clearly a valid string. But consider our SqlConnection from above. What if we had

    IDBConnection connection = ConnectionFactory.CreateConnection()
    ? If the call fails, we have essentially three options: throw an exception, return null, or return a Null Object. If we return null, we've either lost all error information, or we have to obtain it in some way from the ConnectionFactory. And if we're in a multithreaded environment, the API for making sure you get YOUR error information, and not the information for the call that came in immediately after yours, can get complicated. On the other hand, if we return an instance of IDBConnection with an invalid ConnectionState, we can just put all the error information on it, and life is easier.
    But if I don't have an orange, I don't really care about being able to compare it. Suppose instead of giving it away, I never had it. Is there any need for me to be able to hypothesise what an orange might be like if I had one?

    How would interfaces work? Extrapolating on our fruit example, would it mean there is a Null Fruit Object even though an actual instance of Fruit is only possible in an implementing class (Orange or Apple)? Do we then still have Apple/Orange Null Objects too?

    In your DB examples it would be possible to return a valid object (NULL Object, if you like) even on unsuccessful connection. This doesn't mean that NULL is useless. Perhaps there is some use in a Null Object (I've even used dummy objects for different tasks now I think about it), but I'm not sure that they are necessarily a replacement for NULL. It seems to me that removing NULL in favour of Null Objects is complicated for little benefit.

    And (admittedly maybe I'm getting a little off track), what do we do in the case that memory cannot be allocated for an object? How can we represent that we have failed to allocate an Object, when we need an instance of the Object to be able to see this? Does each type of Object that might be used need to have a dummy stored in memory?
    I agree that being able to distinguish between different types of errors can be useful (indeed, various mechanisms for this sort of thing appear to have been hacked into systems fairly early on), but they are not always useful anyway.
    Maybe a return type of integer from any method, with some parameters passed by value, is a good way to address error-checking concerns.

    Showing a potential use for a Null Object does not necessarily show that non-nullable variables have an advantage over nullable ones - and I have no issue with an object being returned in the above examples (with a flag like connected = false); however, I don't think that means that we should get rid of null altogether...

  • (cs) in reply to O'Malley
    O'Malley:
    if (x == false)
    is, I suppose, marginally better than
    if (x == true)
    but I can't say I've ever had any difficulty understanding or noticing the "stupid" ! operator. Something like "if (x == true)" screams out to me that the person doesn't understand what a boolean is.

    But then neither is as bad as coding if (x == false) followed by an empty set of brackets, then putting all actual code in the "else" clause.

    I can make it through whatever life throws at me because I've actually found people who think that's a better solution.

  • hmmmm (unregistered) in reply to voyou
    voyou:
    But surely you still have to check validity. What's the difference between checking nullity everywhere and checking validity everywhere?

    If you have a type difference between variables that can be null and those which can't, then you know when you need to check for null and when you don't, and the compiler can enforce this. Here is a discussion of how this works with Haskell's Maybe type. With Java (or C# reference types), it's hard for readers of the code to tell whether a variable has been checked for null or not, and frequently impossible for the compiler to know.

    Alternatively, if you don't have non-nullable variables, you always know that you can check for null.

    Readers of code will know if a parameter has been checked for null if you do it at the top of a method - sanity-checking input (even with non-nullable types, I would think you check their validity as soon as possible). If you mean that you don't know whether they've been checked for NULL elsewhere, then I don't think we care. Assuming that checking will happen elsewhere is fraught with danger because later someone might change things that happen elsewhere. This means a lot of checking (as other people have said), but I would think/hope that you still need a lot of checking for validity on Null Objects - which are essentially still just placeholders for our concept of null.

    I don't really understand why the compiler needs to check whether you check for null, other than trying to force you to be sensible. Alas, this is still not a problem you get over with Null Objects, because although they can't be null, they still can be invalid (in the context of the program), so should the compiler then be ensuring that some sanity test is done on them? If so, how?

    but meh... it's clear that no one here will be able to convince me that forcing non-nullability is a good thing, and it's clear that I won't be able to convince them otherwise, so I'm going home!! BYE!!

  • (cs) in reply to hmmmm
    hmmmm:
    Dathan:
    Doug:
    Yes, you are correct. When people say null references should not be allowed, what they really mean is that there should be a strong distinction in the type system between nullable and non-nullable references.

    Umm... No. I would assert that when MOST people say "null references should not be allowed," what they really mean is that the literal value null doesn't exist at all in the type system - or at least isn't assignable to ANY reference type. Nullable type systems aren't necessary - there are a number of alternatives that offer some compelling improvements. The Option monad comes to mind, and so does the Null Object pattern. While null is a convenient concept, under most circumstances it's much better for your system to use a type-specific "invalid" value - but only when supported by the type system, e.g., so you don't end up with an integer field that has "-99" to indicate invalid values.

    But wouldn't Null Objects just hide the problem, rather than fixing it? A seemingly valid object is passed that doesn't behave as expected because it isn't really a real object. I don't understand how this has any advantage over NULL. If we don't check for NULL, and try to execute a method on it, we have issues. Instead we have an object that (for all intents and purposes) looks like an object we can handle, whose methods we can call, but whose responses are useless and return some default values which might or might not be useful. Essentially we are over-engineering the idea of NULL to create a concept that merely masks some of the issues with NULL - issues which will probably manifest in other ways.

    Think about the real world. If I have 2 oranges and give them both away, I suddenly have nothing. I don't have a special representation of an orange (perhaps just the peel) that can remind me I have nothing, but which I can still try to manipulate like an orange. I have nothing. This is the same nothing I have when I give away my only 3 apples. That's right, despite these objects being very different, I still end up with the same nothing when I don't have any of them.

    NOTE: This argument is directly against a notion of a Null Object - I won't pretend to understand the Option monad, and perhaps it does somehow offer a reasonable alternative.

    The NULL OBJECT pattern that I know and love doesn't work that way; it's a full object that does what you need it to do, but it has values set to defaults so there won't be null exceptions. For instance, if you were writing a Payroll application and an Employee wasn't found, instead of getting a NullReference (or RecordNotFound or similar) exception, you'd get a "NullEmployee" which inherits from Employee, but has name, title, etc. set to empty string, and calling a method that calculates the pay would be hard-coded to return a 0, because a NullEmployee has no pay. Similarly asking if it's payday for a NullEmployee would always return false, because it's never time to pay a nonexistent employee.

    That's the NULL OBJECT pattern that I'm familiar with. Your calling code never has to know that it's dealing with a null entity, just any operation is basically stubbed out.
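
    In code, something like this - with Employee trimmed to the bare minimum for the example:

        class Employee
        {
            public virtual string Name { get; set; }
            public virtual decimal CalculatePay() { /* real payroll logic here */ return 2000m; }
            public virtual bool IsPayday() { return DateTime.Today.Day == 1; }
        }

        class NullEmployee : Employee
        {
            public override string Name { get { return ""; } set { } }
            public override decimal CalculatePay() { return 0m; }   // a nonexistent employee earns nothing
            public override bool IsPayday() { return false; }       // and it's never payday for one
        }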

  • (cs)

    The only WTF here is that the submitter thought this was a WTF. (and that he said sense instead of since, and that Alex thought this was a WTF... that worries me)

    Rockstar:2, Greg:0, TDWTF:-2

  • Bit Head (unregistered) in reply to Gunslinger
    Gunslinger:
    Super-Anonymoused:
    Unfortunately, we do have some of that (booleanCondition == false) shit going on. Because it's "clearer" apparently. At least it's not (booleanCondition != true).

    What would you rather have them use?

    Are you a troll, or just ignorant? At the risk of the former but hoping the latter:
    if (!booleanCondition)
    is much more succinct than
    if (booleanCondition == false)
  • bob (unregistered)

    You can argue the merits of NULL until you're blue in the face. It doesn't matter because most languages have a concept of NULL. So what's more important is understanding exactly what NULL is and how you work with it.

    Someone said NULL indicates uninitialised variables. It varies by language but at a conceptual level that notion is wrong. Consider that, underneath all of the layers of abstraction we've built, everything is just memory. When you define a variable in C, if you don't initialise it then you get whatever is in the memory block for that variable. It could be all zeros or (more likely) garbage. Setting your variable to NULL explicitly zeros primitive types and pointers.

    In an object oriented language like C#, the concept of NULL still applies. The fact that the language and runtime "protect" you from memory doesn't mean NULL is any less significant. Nullable<T> allows developers to define a contract for when NULL values are acceptable and when they're not. I think there's value there, especially when writing a library. But Nullable<T>.HasValue is just syntactic sugar.
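
    For example, with a value type:

        int? maybeCount = null;                     // shorthand for Nullable<int>
        if (maybeCount.HasValue)
            Console.WriteLine(maybeCount.Value);
        else
            Console.WriteLine("no count supplied");

        int boom = maybeCount.Value;                // throws InvalidOperationException - it doesn't silently return 0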

    Generally, you should always validate your inputs. If you write a function that expects an input to be non-NULL, then you should check for it and return an error code or throw an exception. However, checking for NULL is only one type of validation. If your function expects a string to be 10 characters, then you should check for that too.

    There's nothing inherently wrong with the Guard class, but it stinks of ignorance and bad design. You should be checking inputs when it matters, not flailing because you think it's "safe" or "better".

  • get off my lawn (unregistered) in reply to bob
    bob:
    When you define a variable in C, if you don't initialise it then you get whatever is in the memory block for that variable. It could be all zeros or (more likely) garbage. Setting your variable to NULL explicitly zeros primitive types and pointers.

    Zeroes primitive types? What? Did you even try that before you said that? You get at best a compile time warning, and at worst a compile time error. Using NULL as a constant zero is stupid, because it has a type of void *, which is a pointer. Generally speaking, you can coerce pointers into integers, which is why int i = NULL; will work (but the compiler will complain you're converting from a pointer to an integer implicitly). Go ahead, and try doing float i = NULL; here's what GCC has to say about that: error: incompatible types when initializing type ‘float’ using type ‘void *’

    Leave C to people who actually know it.

  • (cs) in reply to Herby
    Herby:
    Matt Westwood:
    Didn't use Wikipedia? Blasphemy! Everybody who thinks they've got a degree who didn't use Wikipedia to get it ought to have their qualification stripped from them. Euler, Gauss, Hilbert, Cantor - I mean, how smart were they really? Can't have been that smart, they never used Wikipedia.

    Yes, there ARE some people who DIDN'T use wikipedia, they were the ones who used this wonderful technology called BOOKS. I believe that they were better for it. They can be used with this other wonderful technology: candles.

    Candles, oh yeah, that's things with flames at the top, nice one, we can use them to burn all those stupid books.

    Or to put it more succinctly: YHBT, YHL. FOAD.

  • Burt R (unregistered)

    Greg Mitchell, the real WTF.

  • QJo (unregistered) in reply to hmmmm
    hmmmm:
    voyou:
    But surely you still have to check validity. What's the difference between checking nullity everywhere and checking validity everywhere?

    If you have a type difference between variables that can be null and those which can't, then you know when you need to check for null and when you don't, and the compiler can enforce this. Here is a discussion of how this works with Haskell's Maybe type. With Java (or C# reference types), it's hard for readers of the code to tell whether a variable has been checked for null or not, and frequently impossible for the compiler to know.

    Alternatively, if you don't have non-nullable variables, you always know that you can check for null.

    Readers of code will know if a parameter has been checked for null if you do it at the top of a method - sanity-checking input (even with non-nullable types, I would think you check their validity as soon as possible). If you mean that you don't know whether they've been checked for NULL elsewhere, then I don't think we care. Assuming that checking will happen elsewhere is fraught with danger because later someone might change things that happen elsewhere. This means a lot of checking (as other people have said), but I would think/hope that you still need a lot of checking for validity on Null Objects - which are essentially still just placeholders for our concept of null.

    I don't really understand why the compiler needs to check whether you check for null, other than trying to force you to be sensible. Alas, this is still not a problem you get over with Null Objects, because although they can't be null, they still can be invalid (in the context of the program), so should the compiler then be ensuring that some sanity test is done on them? If so, how?

    but meh... it's clear that no one here will be able to convince me that forcing non-nullability is a good thing, and it's clear that I won't be able to convince them otherwise, so I'm going home!! BYE!!

    Alright, another concrete example.

    Suppose you have a form in which you may type numbers. You may wish to do arithmetic on these numbers (e.g. make the bottom row a total row, the RHS column another set of totals, make the 3rd column the difference between the 1st and 2nd column, whatever).

    It can be convenient under these circumstances to model a cell which has not had a value entered into it as a "null cell". It is then convenient and productive to take into account the behaviour of "null cells" when implementing the code to process the arithmetic. This is far easier to maintain than implementing an unentered cell as an actual linguistic "null".
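
    A sketch of what I mean, with the types invented for the example:

        abstract class Cell
        {
            public abstract bool HasEntry { get; }
            public abstract decimal Value { get; }
        }

        class NullCell : Cell
        {
            public override bool HasEntry { get { return false; } }
            public override decimal Value { get { return 0m; } }    // contributes nothing to a total
        }

    The totalling code can then just do row.Sum(cell => cell.Value) without special-casing unentered cells.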

    No doubt someone is smugly going to announce that I'm reinventing Excel with this hypothetical situation, but I am just going to ignore them.

  • Knagis (unregistered) in reply to smxlong
    smxlong:
    You guys are lunatics. If an argument is null and it must never be null, the sort of thing which has gone wrong is an ARGUMENT exception. Simply letting the thing detonate when the reference is used causes a number of problems.

    1a. You may have made a bunch of changes to program state which must now be unwound. What a freaking pain.
    1b. In order to unwind those changes, you must wrap the use of the reference in a try block, what a pain in the ass.
    2. The reference may not even be used -- it could simply be stored, only to detonate much later, far from the code which provided it, making for "interesting" debugging.

    Ahh, The Daily WTF. Where people espouse the philosophy of "just let it explode" instead of deliberately enforcing preconditions. A WTF? Certainly.

    You are completely and absolutely wrong. First of all, note the fact that the code is located within a library. A library can be reused by anyone by just adding a reference. You cannot enforce anything on an external caller. The code can be called directly or via reflection, the external library can be compiled for MS.NET or Mono...

    Your library is responsible for validating the input from its caller, and for giving out sensible error messages.

    Just imagine any file operation, such as reading a whole file into memory. What would you prefer in case the file name is not valid - a generic Win32Exception or a FileNotFoundException?

    Your 1b point also shows that you do not understand the ArgumentException - it is to inform the caller that he is using the method in an invalid way. You will not catch that exception, instead you will correct it.

    And your point #2 is also invalid - the presence of ArgumentNullException shows that the reference is needed by the method. Even if in one case the branching of code might not require the reference, another might. And that branching might not be because of the method arguments but rather some external state.

    And now the final example:

    void SaveData(ISaveable a, ISaveable b, ISaveable c)
    {
      using (var stream = CreateFile("t.dat"))
      {
        a.Serialize(stream);
        b.Serialize(stream);
        c.Serialize(stream);
      }
    }
    void Main()
    {
      SaveData(something, something, null);
    }
    

    The code above throws NullReferenceException when it calls c.Serialize. But the previous two calls have completed. The stream had a bunch of bytes written. Now when the exception interrupts the code, the stream is closed but the file contents are not complete. This is the very purpose of ArgumentExceptions - if the method knows it requires the argument to be of a certain kind, it has to validate it upfront so that the method does not have to include a lot of clean-up code to recover from the NullReferenceException (in this case it would require deletion of the incomplete file).
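
    In other words, the fix is three lines at the top of the same sketch:

        void SaveData(ISaveable a, ISaveable b, ISaveable c)
        {
          if (a == null) throw new ArgumentNullException("a");
          if (b == null) throw new ArgumentNullException("b");
          if (c == null) throw new ArgumentNullException("c");

          // Nothing has been written yet, so a bad argument can no longer
          // leave a half-written file behind.
          using (var stream = CreateFile("t.dat"))
          {
            a.Serialize(stream);
            b.Serialize(stream);
            c.Serialize(stream);
          }
        }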

    A similar example in .NET (but perhaps it will get the message over clearer) is the MemoryFailPoint class. This class is used to ensure that your huge operation will have enough memory to complete and not throw OutOfMemoryException in the middle.

  • sroughley (unregistered)

    I don't see a problem here. Making use of a helper class to perform argument checking seems like a sound idea to me. It's the closest .NET devs could get to coding to contract before Code Contracts came in.

    Stephen

  • ray (unregistered)

    The submission is a WTF.

  • zirias (unregistered) in reply to Knagis
    Knagis:
    smxlong:
    You guys are lunatics. If an argument is null and it must never be null, the sort of thing which has gone wrong is an ARGUMENT exception. Simply letting the thing detonate when the reference is used causes a number of problems.

    1a. You may have made a bunch of changes to program state which must now be unwound. What a freaking pain.
    1b. In order to unwind those changes, you must wrap the use of the reference in a try block, what a pain in the ass.
    2. The reference may not even be used -- it could simply be stored, only to detonate much later, far from the code which provided it, making for "interesting" debugging.

    Ahh, The Daily WTF. Where people espouse the philosophy of "just let it explode" instead of deliberately enforcing preconditions. A WTF? Certainly.

    You are completely and absolutely wrong. [...]

    Are you sure about your understanding of your quoted comment?

  • ray (unregistered)

    I refuse to read thedailywtf again until this wtf submission is classified as wtf.

  • JolleSax (unregistered) in reply to shadowman
    shadowman:
    Sam:
    I'm a java programmer so...

    Checking method parameters for nulls and throwing IllegalArgumentException is totally appropriate, and considered more correct than just letting a NullPointerException happen.

    Having a class to assist this doesn't seem like a bad thing to me. Maybe I need to see how Guard (bad name) is used in other classes to understand.

    The real WTF sounds like the reimplemented Collection classes.

    I'm not a Java programmer, so I don't know how it works there. To me it just seems like a slight semantic difference between a NullReferenceException occurring naturally when the null parameter is used vs checking and preemptively throwing an ArgumentNullException for the same purpose. In either case, you end up with an exception and both are correct when they occur at the appropriate time.

    Would make more sense to simply handle the NullReferenceException or do some other checking elsewhere to prevent it from occurring, etc. Not just cut one exception off with another.

    Let me reiterate this. A NullReferenceException tells you there's something wrong with the code.

    THERE'S SOMETHING WRONG WITH THE CODE.

    Good code doesn't let them occur.

  • ray (unregistered) in reply to JolleSax

    And when it does, would you rather catch it earlier and know exactly which argument was incorrectly null, or wait for it to happen at random times possibly many stack frames later?

  • L. (unregistered) in reply to ObiWayneKenobi

    Sir, that's plain disgusting .. hardcoding fail at the heart of your application, making the cpu run around not-calculating fail values is pure nonsense ... the friggin employee does not exist so why even start calculating whatever ???

    hardcoding is bad, especially if you have to put some everywhere in your application just in case .. and besides your nullemployee or a null value check return are exactly the same thing, except the nullemployee is by design WRONG.

    It's exactly like '<emptystring>' instead of '' ... pointless, useless and way more dirty.

    Being afraid of null exceptions to the point where you create buggy hardcoded crap is a WTF.

  • (cs) in reply to Dathan
    Dathan:
    I disagree with your argument on two points: 1) The semantics of null in programming are only loosely related to the concept of "Nothing" in the real world. Your example of oranges, I would argue, is closer to a Null Object situation than a true null. If you have an orange, and then you give it away, you still have a concept of the orange. Sort of a notional orange. True, you can't perform operations that require manipulating the true orange - e.g., you can't peel and eat it, or compare its color to that of another orange. But you can still perform certain actions on the notion of that orange. For instance, you can definitively say that your orange wasn't an apple. That's something you CANNOT do with a null value.

    If you have

    SqlConnection conn = null
    you can say that, in this case, conn is intended to be a SqlConnection. However, if we passed it to a method whose parameter was typed as the supertype IDBConnection, then within the scope of that method you've lost all information about the original intended type of conn. If, however, we used
    SqlConnection conn = new NullSqlConnection()
    , then we have all the type information we would have with a real, useful SqlConnection instance. And in some circumstances that's important.

    2. You can't really do anything with null. You can't invoke methods on it, or compare it to another instance in any meaningful way. And you're assuming that those same limitations apply to Null Objects - but they don't. Consider, for instance, the empty string. Semantically, it's very different from the null string, right? You can compare it to other strings, concatenate other strings onto it, etc.

    Now, this is a bad example, since the Null Object has a semantic of "invalid", whereas the empty string is clearly a valid string. But consider our SqlConnection from above. What if we had

    IDBConnection connection = ConnectionFactory.CreateConnection()
    ? If the call fails, we have essentially three options: throw an exception, return null, or return a Null Object. If we return null, we've either lost all error information, or we have to obtain it in some way from the ConnectionFactory. And if we're in a multithreaded environment, the API for making sure you get YOUR error information, and not the information for the call that came in immediately after yours, can get complicated. On the other hand, if we return an instance of IDBConnection with an invalid ConnectionState, we can just put all the error information on it, and life is easier.
    Now I'm not arguing against a Null object, because I know from experience how much of a pain in the behind NullPointerExceptions (I'm a Java developer) are.

    However, I'm not particularly impressed by the examples you give. For one thing, working on a null reference is a bug, plain and simple. It's a situation that shouldn't arise in the first place. Now we all know that these situations arise more often than you can shake a stick at, and having the notion of Null objects makes sense.

    Not knowing what the difference is between SqlConnection and IDBConnection, I don't really see the point. The method works on one or the other type; if it expects an IDBConnection, the additional functionality of SqlConnection is unknown to it.

    Also, the argument 'you have to check for null everywhere' doesn't hold. If you have to check for null, you haven't designed your method very well. This isn't C, you know, where you had to check for NULL values. In Java, if you request a database connection and something goes wrong, you don't get returned a null object: instead, a checked exception is thrown. The programmer MUST handle this exception. So the whole concept of a null reference is moot here. The SQLException will (hopefully!) contain enough information about why this call failed, and there's no such concept of 'having to query the ConnectionFactory' or it being difficult in a multi-threaded environment. Bad example.

    Null simply means 'nothing'. As mentioned, the problem is that it's a straight translation of a pointer (which is why it's a NullPointerException), which lacks a type. Databases have null fields, but they're typed (which is why you have to write such awkward code in JDBC when setting a null value). So that would be an argument for a Null object.

    However, nobody is stopping you from writing a Null object in Java. Suppose you have this code:

    MyObject myObject = new MyObject();

    Hey presto, we have our null object. It's somewhat of a kludge, because there's no way to enforce that you can't write this:

    MyObject myObject = null;

    but on the other hand, do you need to enforce everything on the developer? Having null is convenient at times, just like having primitives is convenient at times. Arguments have been put forward that both should be removed from Java, just like arrays, but there's a difference between having a language that is beautifully constructed and truly object-oriented, and having a language where certain trade-offs have been made so that you can handle byte[] instead of List<Byte>.

    If we have to be held by the hand for everything we develop, we might as well all switch to Logo.

  • lmm (unregistered) in reply to hmmmm
    hmmmm:
    Alternatively, if you don't have non-nullable variables, you always know that you can check for null.

    Or to put it another way, you never know a variable is non-null, and you have to manually null-check in every single method you write. Which is stupid and tedious.

    Readers of code will know if a parameter has been checked for null if you do it at the top of a method - sanity-checking input (even with non-nullable types, I would think you check their validity as soon as possible). If you mean that you don't know whether they've been checked for NULL elsewhere, then I don't think we care. Assuming that checking will happen elsewhere is fraught with danger because later someone might change things that happen elsewhere.
    You missed the part where the compiler knows about it. It won't let you pass a null or a nullable object to a method that expects a non-null object. Say you have method a that calls method b that calls method c, and method a takes a nullable parameter. Then you can null-check in method a, and pass a non-null object to method b, and methods b and c don't need to do any null-checking because the object they're passed is guaranteed not to be null. And there's no risk of someone removing the null-check from method a, because the compiler won't let them.
    This means a lot of checking (as other people have said), but I would think/hope that you still need a lot of checking for validity on Null Objects - which are essentially still just placeholders for our concept of null.
    You don't use a Null Object for an invalid object - you use a Maybe Valid object, if that's what you have. And of course you have to check such a thing before you use it. But a lot of the time you know your object is definitely valid - and by having a type that represents a definitely valid object, you can avoid checking what you don't need to and make sure you check what you do need to.
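
    You can approximate this today in C# with a wrapper struct - a sketch only, since default(NotNull<T>) can still bypass the constructor, and Widget/Frob are invented for the example:

        struct NotNull<T> where T : class
        {
            private readonly T value;
            public NotNull(T value)
            {
                if (value == null) throw new ArgumentNullException("value");
                this.value = value;
            }
            public T Value { get { return value; } }
        }

        void A(Widget maybeNull)                        // nullable at the boundary
        {
            if (maybeNull == null) return;              // the one and only null check
            B(new NotNull<Widget>(maybeNull));
        }

        void B(NotNull<Widget> w) { C(w); }             // no check needed here...
        void C(NotNull<Widget> w) { w.Value.Frob(); }   // ...or here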
  • Shinobu (unregistered) in reply to hmmmm
    hmmmm:
    Think about the real world. If I have 2 oranges and give them both away, I suddenly have nothing. I don't have a special representation of an orange (perhaps just the peel) that can remind me I have nothing, but which I can still try to manipulate like an orange. I have nothing. This is the same nothing I have when I give away my only 3 apples. That's right, despite these objects being very different, I still end up with the same nothing when I don't have any of them.
    Actually, what you had was a Collection of 2 Oranges, and after you gave them away, you still had a valid Collection. You can for example invoke the Count or Add methods.

    Dathan, even if you have no Orange, you can still call Orange's static methods.

  • (cs) in reply to JolleSax
    JolleSax:
    Let me reiterate this. A NullReferenceException tells you there's something wrong with the code.

    THERE'S SOMETHING WRONG WITH THE CODE.

    Good code doesn't let them occur.

    Exactly. We've reached the point that 'getting a NullPointerException means bad code' morphed into 'having NullPointerExceptions is bad' and finally into 'NullPointerExceptions are bad', so that solutions are sought to get rid of these 'bad' NullPointerExceptions, which are caused by 'bad' null objects.

    Don't get me wrong: there's nothing wrong with having the compiler check certain things. I'm not complaining about getting ArrayIndexOutOfBoundsException either. And having a runtime check for non-null values makes sense, too.

    But I refuse to check parameters passed to my API methods to make sure that the other developer isn't a trained chimp. If I write in the JavaDocs that you must pass a non-null instance of ClassX, you must pass a non-null instance of ClassX. You must not pass an instance of ClassY, except if it's derived from ClassX, and you must most definitely not pass null. If you do pass null, I can guarantee you that you get a NullPointerException at some point in the code.

    (Hmm... the previous paragraph reads rather a lot like the Book of Armaments, chapter 2, verses 9-21.)

  • QJo (unregistered) in reply to Severity One
    Severity One:
    JolleSax:
    Let me reiterate this. A NullReferenceException tells you there's something wrong with the code.

    THERE'S SOMETHING WRONG WITH THE CODE.

    Good code doesn't let them occur.

    Exactly. We've reached the point that 'getting a NullPointerException means bad code' morphed into 'having NullPointerExceptions is bad' and finally into 'NullPointerExceptions are bad', so that solutions are sought to get rid of these 'bad' NullPointerExceptions, which are caused by 'bad' null objects.

    Don't get me wrong: there's nothing wrong with having the compiler check certain things. I'm not complaining about getting ArrayIndexOutOfBoundsException either. And having a runtime check for non-null values makes sense, too.

    But I refuse to check parameters passed to my API methods to make sure that the other developer isn't a trained chimp. If I write in the JavaDocs that you must pass a non-null instance of ClassX, you must pass a non-null instance of ClassX. You must not pass an instance of ClassY, except if it's derived from ClassX, and you must most definitely not pass null. If you do pass null, I can guarantee you that you get a NullPointerException at some point in the code.

    (Hmm... the previous paragraph reads rather a lot like the Book of Armaments, chapter 2, verses 9-21.)

    I disagree. It is prudent to guard against the users being silly. It adds value to tell them in what way they are being silly. If nothing else, then it removes the cause of some of the interruptions that go along the lines of: "I called your stupid useless method on your stinking class and it threw a damned NullPointerException." Using defensive programming techniques may irritate the purists who claim that they should not need to trap out values which are non-compliant with the terms of the API, but it sure does make your class easier to use.

    Which would you rather do - spend some extra time ensuring that the arguments coming in adhere to appropriate values, throwing e.g. an IllegalArgumentException complete with an explanatory error message detailing exactly what the client software has done incorrectly, or spend no little time on the telephone smugly explaining in a superior voice that your program is fine, it's the fault of the stupid user? I confess to being in the first category, as I have absolutely no time nor patience with those who prefer to place themselves in the second. Such people can find another place of employment, as they won't last long as employees or paid consultants of mine.

  • eVil (unregistered) in reply to Severity One
    Severity One:
    JolleSax:
    Let me reiterate this. A NullReferenceException tells you there's something wrong with the code.

    THERE'S SOMETHING WRONG WITH THE CODE.

    Good code doesn't let them occur.

    Exactly. We've reached the point that 'getting a NullPointerException means bad code' morphed into 'having NullPointerExceptions is bad' and finally into 'NullPointerExceptions are bad', so that solutions are sought to get rid of these 'bad' NullPointerExceptions, which are caused by 'bad' null objects.

    Don't get me wrong: there's nothing wrong with having the compiler check certain things. I'm not complaining about getting ArrayIndexOutOfBoundsException either. And having a runtime check for non-null values makes sense, too.

    But I refuse to check parameters passed to my API methods to make sure that the other developer isn't a trained chimp. If I write in the JavaDocs that you must pass a non-null instance of ClassX, you must pass a non-null instance of ClassX. You must not pass an instance of ClassY, except if it's derived from ClassX, and you must most definitely not pass null. If you do pass null, I can guarantee you that you get a NullPointerException at some point in the code.

    (Hmm... the previous paragraph reads rather a lot like the Book of Armaments, chapter 2, verses 9-21.)

    Can't we just accept that it depends on the situation?

    If you're writing a commercially available library that's going to be used by other people (who are possibly not as clever as you), and you want it to be any good, then you ought to use ArgumentNullExceptions properly.

    If you're writing something small and fast, which is only ever going to be consumed internally by yourself or your company, then it's fine to take shortcuts; the only people who will ever use it know how to use it, and anyone who doesn't know how can sod off.

  • L. (unregistered) in reply to QJo
    QJo:
    Severity One:
    JolleSax:
    Let me reiterate this. A NullReferenceException tells you there's something wrong with the code.

    THERE'S SOMETHING WRONG WITH THE CODE.

    Good code doesn't let them occur.

    Exactly. We've reached the point that 'getting a NullPointerException means bad code' morphed into 'having NullPointerExceptions is bad' and finally into 'NullPointerExceptions are bad', so that solutions are sought to get rid of these 'bad' NullPointerExceptions, which are caused by 'bad' null objects.

    Don't get me wrong: there's nothing wrong with having the compiler check certain things. I'm not complaining about getting ArrayIndexOutOfBoundsException either. And having a runtime check for non-null values makes sense, too.

    But I refuse to check parameters passed to my API methods to make sure that the other developer isn't a trained chimp. If I write in the JavaDocs that you must pass a non-null instance of ClassX, you must pass a non-null instance of ClassX. You must not pass an instance of ClassY, except if it's derived from ClassX, and you must most definitely not pass null. If you do pass null, I can guarantee you that you get a NullPointerException at some point in the code.

    (Hmm... the previous paragraph reads rather a lot like the Book of Armaments, chapter 2, verses 9-21.)

    I disagree. It is prudent to guard against the users being silly. It adds value to tell them in what way they are being silly. If nothing else, then it removes the cause of some of the interruptions that go along the lines of: "I called your stupid useless method on your stinking class and it threw a damned NullPointerException." Using defensive programming techniques may irritate the purists who claim that they should not need to trap out values which are non-compliant with the terms of the API, but it sure does make your class easier to use.

    Which would you rather do - spend some extra time ensuring that the arguments coming in adhere to appropriate values, throwing e.g. an IllegalArgumentException complete with an explanatory error message detailing exactly what the client software has done incorrectly, or spend no little time on the telephone smugly explaining in a superior voice that your program is fine, it's the fault of the stupid user? I confess to being in the first category, as I have absolutely no time nor patience with those who prefer to place themselves in the second. Such people can find another place of employment, as they won't last long as employees or paid consultants of mine.

    Which would you rather do? Code a whole lot of lines to manage the inherent fail present in most programmers, or code a few lines of faster, more efficient, more beautiful code which can indeed be used only by those who give a f**k?

    In the end it's all a matter of how much you care about the use of your API in low-quality applications, i.e. the use of your API by more clients.

    "Easier to use" is not the correct term: it's not easier, it's much worse in terms of code quality/efficiency, BUT it enables really bad programmers to use your API, which is ONLY good from a business point of view (which may be all that matters in quite a few cases).

    In other words, this whole discussion is pointless, there is no sense in comparing "coding for quality" and "coding for business", the purposes are different and so are the best practices.

  • (cs) in reply to hmmmm
    hmmmm:
    Think about the real world. If I have 2 oranges and give them both away, I suddenly have nothing. I don't have a special representation of an orange (perhaps just the peel) that can remind me I have nothing, but which I can still try to manipulate like an orange. I have nothing. This is the same nothing I have when I give away my only 3 apples. That's right, despite these objects being very different, I still end up with the same nothing when I don't have any of them.
    Repeat after me:

    Null does not mean 0

    Got that? You're mixing up null and zero, two very different concepts. Let me write it as code:

    // Requires: import java.util.ArrayList; import java.util.Collection;
    Collection<Orange> oranges = new ArrayList<Orange>();
    oranges.add( new Orange() );
    oranges.add( new Orange() );
    
    Collection<Apple> apples = new ArrayList<Apple>();
    apples.add( new Apple() );
    apples.add( new Apple() );
    apples.add( new Apple() );
    
    // "Give them all away": both collections are now empty.
    oranges.clear();
    apples.clear();
    
    // Prints "true": two empty lists compare equal regardless of
    // their element type - the same "nothing", and no null in sight.
    System.out.println( oranges.equals( apples ) );

    That's what you get when you start comparing apples and oranges, or in this case, sets of apples and sets of oranges. You can't, at least not in a type-safe language.

  • midas (unregistered)

    Am I getting this right - checking for null reference and throwing an ArgumentNullException is considered a WTF, or am I missing something else?

    If so, here's my question: When the f*ck else are you supposed to throw System.ArgumentNullException? It exists for a reason?

  • (cs)

    Why is he called a RockStar? Does it have something to do with his Big Stones?

  • (cs) in reply to QJo
    QJo:
    Severity One:
    But I refuse to check parameters passed to my API methods to make sure that the other developer isn't a trained chimp. If I write in the JavaDocs that you must pass a non-null instance of ClassX, you must pass a non-null instance of ClassX. You must not pass an instance of ClassY, except if it's derived from ClassX, and you must most definitely not pass null. If you do pass null, I can guarantee you that you get a NullPointerException at some point in the code.
    I disagree. It is prudent to guard against the users being silly. It adds value to tell them in what way they are being silly. If nothing else, then it removes the cause of some of the interruptions that go along the lines of: "I called your stupid useless method on your stinking class and it threw a damned NullPointerException."
    Trust me: nobody at work talks to me in such a way.
    Using defensive programming techniques may irritate the purists who claim that they should not need to trap out values which are non-compliant with the terms of the API, but it sure does make your class easier to use.
    It has nothing to do with being purist. It has everything to do with me not being a school teacher. You may call it defensive programming; I call it not trusting other programmers. It's not my fault that he cannot ensure that he isn't passing a null object, so it's not my responsibility either to make sure he doesn't.

    It's like putting ABS on a car. The result is that drivers take more risks, because "the car brakes better". The net result is a far smaller decrease in accidents than you might have expected. Defensive driving versus defensive engineering.

    And it's not easier to use. It's perhaps a bit easier to track bugs in your own code, but that's not the task of an API. The task of an API is to do whatever it's been designed for, and I never design APIs to train chimps.

    Which would you rather do - spend some extra time ensuring that the arguments coming in adhere to appropriate values, throwing e.g. an IllegalArgumentException complete with an explanatory error message detailing exactly what the client software has done incorrectly, or spend no little time on the telephone smugly explaining in a superior voice that your program is fine, it's the fault of the stupid user?
    I think you could have done without adjectives like 'smugly' (OK, that's an adverb) and 'superior'.

    But you also need to read carefully: I didn't say I never check arguments for validity; I said I don't check arguments for the possibility that the developer using my classes is a trained chimp. If it makes sense to test a parameter, I will. I may even throw an IllegalArgumentException if you need to pass an enum, which is checked inside a switch() block, and it might be null. But that is purely to handle the default case, which one should always do.

    However, I very much prefer the sort of design where mistakes are avoided in the first place, instead of having checks everywhere because somebody else might be a lazy sod, or otherwise unfit to be called a developer.

    Where this is not feasible, though, I'm not going to jump through hoops to do somebody else's job.
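
    For what it's worth, the enum-in-a-switch case mentioned above might look like the sketch below (the enum and method are invented). One caveat: in Java, switching on a null enum reference throws a NullPointerException at the switch itself, so null never actually reaches the default case and needs its own check:

        public class TrafficLight {
            enum Color { RED, GREEN }

            void handle( Color color ) {
                // Switching on null would throw NullPointerException here,
                // so check first if an IllegalArgumentException is wanted instead.
                if ( color == null ) {
                    throw new IllegalArgumentException( "color must not be null" );
                }
                switch ( color ) {
                    case RED:
                        // stop();
                        break;
                    case GREEN:
                        // go();
                        break;
                    default:
                        // Unreachable today, but guards against constants
                        // added to the enum later.
                        throw new IllegalArgumentException( "Unexpected color: " + color );
                }
            }
        }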

  • (cs) in reply to midas
    midas:
    Am I getting this right - checking for null reference and throwing an ArgumentNullException is considered a WTF, or am I missing something else?
    No, you're not getting it right. Checking for null and throwing an IllegalArgumentException (instead of a NullPointerException) is a WTF, though.
    If so, here's my question: When the f*ck else are you supposed to throw System.ArgumentNullException? It exists for a reason?
    Of course it does, but at least to PMD (a Java source code analyser) it is a code smell.

    Perhaps a better idea would be to use the assert keyword. Like this:

    assert param != null;

    (Some people might prefer 'assert ( param == null ) == false;' for 'readability', if the preceding conversation is to be believed. PMD doesn't like that either.)

    The thing about assertions is that you can disable them at runtime. Since the check is meant to catch the possibility that a trained chimp is using your code, and thus all this should be caught before the application is launched, it looks like the best solution. Bear in mind that NullPointerException precedes assertions in Java.
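
    A slightly fuller sketch of that (the method and parameter are invented; the message expression after the colon has been part of the assert syntax since assertions arrived in Java 1.4):

        public class Parser {
            void process( String param ) {
                // The expression after the colon becomes the
                // AssertionError's message if the check fails.
                assert param != null : "param must not be null";
                // ...
            }
        }

    Assertions are off by default; run with 'java -ea MyApp' during development and testing to enable them, and leave them disabled in production so the check costs nothing.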

  • QJo (unregistered) in reply to Severity One
    Severity One:
    QJo:
    Severity One:
    But I refuse to check parameters passed to my API methods to make sure that the other developer isn't a trained chimp. If I write in the JavaDocs that you must pass a non-null instance of ClassX, you must pass a non-null instance of ClassX. You must not pass an instance of ClassY, except if it's derived from ClassX, and you must most definitely not pass null. If you do pass null, I can guarantee you that you get a NullPointerException at some point in the code.
    I disagree. It is prudent to guard against the users being silly. It adds value to tell them in what way they are being silly. If nothing else, then it removes the cause of some of the interruptions that go along the lines of: "I called your stupid useless method on your stinking class and it threw a damned NullPointerException."
    Trust me: nobody at work talks to me in such a way.
    Using defensive programming techniques may irritate the purists who claim that they should not need to trap out values which are non-compliant with the terms of the API, but it sure does make your class easier to use.
    It has nothing to do with being purist. It has everything to do with me not being a school teacher. You may call it defensive programming; I call it not trusting other programmers. It's not my fault that he cannot ensure that he isn't passing a null object, so it's not my responsibility either to make sure he doesn't.

    It's like putting ABS on a car. The result is that drivers take more risks, because "the car brakes better". The net result is a far smaller decrease in accidents than you might have expected. Defensive driving versus defensive engineering.

    And it's not easier to use. It's perhaps a bit easier to track bugs in your own code, but that's not the task of an API. The task of an API is to do whatever it's been designed for, and I never design APIs to train chimps.

    Which would you rather do - spend some extra time ensuring that the arguments coming in adhere to appropriate values, throwing e.g. an IllegalArgumentException complete with an explanatory error message detailing exactly what the client software has done incorrectly, or spend no little time on the telephone smugly explaining in a superior voice that your program is fine, it's the fault of the stupid user?
    I think you could have done without adjectives like 'smugly' (OK, that's an adverb) and 'superior'.

    But you also need to read carefully: I didn't say I never check arguments for validity; I said I don't check arguments for the possibility that the developer using my classes is a trained chimp. If it makes sense to test a parameter, I will. I may even throw an IllegalArgumentException if you need to pass an enum, which is checked inside a switch() block, and it might be null. But that is purely to handle the default case, which one should always do.

    However, I very much prefer the sort of design where mistakes are avoided in the first place, instead of having checks everywhere because somebody else might be a lazy sod, or otherwise unfit to be called a developer.

    Where this is not feasible, though, I'm not going to jump through hoops to do somebody else's job.

    And another thing: if you were to persist in snobbishly calling your less gifted colleagues "chimps", you'd be out on your ear so fast you'd break the sound barrier.

  • Z00n3$!$ (unregistered) in reply to Slicerwizard
    Slicerwizard:
    It's highly likely that Alex contributed that. He never fails to deliver the goods.
    He's always left me satisfied and smiling...
  • ledlogic (unregistered)

    I'm not saying the particular approach is redeemable, but there is something to be said for the principle of checking your service method inputs right away, before doing any additional work, and throwing an IllegalArgumentException or InvalidParameterException that names the invalid parameter and pinpoints the problem. NullPointerExceptions are typically too vague.

    In frameworks like Spring, it can be as simple as a misconfigured bean in development, or an unexpected state at runtime. All too often the natural NullPointerException is thrown on a method where any of several parameters could be null or invalid, or the DAO or service is not configured correctly.

    I've worked on projects without this approach and projects with it. The projects that use it make maintenance much easier, even after you've been away from the code for a while.
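
    A minimal sketch of that style, with an invented service method and parameter names (each validation message names the offending argument, so a stack trace alone pinpoints the problem):

        import java.util.List;

        public class OrderService {
            public void placeOrder( String customerId, List<String> itemIds ) {
                // Validate every input up front, before any real work
                // (or any Spring-wired collaborator) is touched.
                if ( customerId == null ) {
                    throw new IllegalArgumentException( "customerId must not be null" );
                }
                if ( itemIds == null || itemIds.isEmpty() ) {
                    throw new IllegalArgumentException( "itemIds must contain at least one item" );
                }
                // ... actual processing ...
            }
        }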

  • (cs) in reply to Dathan

    There are days I hate the way my brain works... I read all that discussion of NULL oranges and thought "would a clockwork orange be null or not"? :)

  • Paul (unregistered)

    I don't see any WTF. Guard classes are very common; even Microsoft has introduced Code Contracts in .NET 4.0. Can anybody explain what's wrong with this code?
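
    For reference, a guard class in the style Paul describes can be as small as the sketch below. The Guard name and its method are illustrative, not any particular library's API; Microsoft's Code Contracts play a similar role in .NET:

        public final class Guard {
            private Guard() {}

            // Throws if the reference is null, naming the parameter.
            public static <T> T notNull( T value, String name ) {
                if ( value == null ) {
                    throw new IllegalArgumentException( name + " must not be null" );
                }
                return value;
            }
        }

    Returning the value lets callers guard and assign in one step, e.g. this.name = Guard.notNull( name, "name" ); in a constructor.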
