• mystery (unregistered)

    i am the frist!!!

  • bits (unregistered)

    I minus well not comment on the grammer - it's a mute point.

  • O'Malley (unregistered)

    Frist?

    if (Enum.IsDefined(enumType, value) == false)

    I'm sorry, if you write code like this you're an idiot. No exceptions.

  • Sam (unregistered)

    I'm a java programmer so...

    Checking method parameters for null's and throwing IllegalArgumentException is totally appropriate, and considered more correct than just letting a NullPointerException happen.

    Having a class to assist this doesn't seem like a bad thing to me. Maybe i need to see how Guard (bad name) is used in other classes to understand.

    The real WTF sounds like the reimplemented Collection classes.
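
    For illustration, a bare-bones sketch of the kind of guard helper being described might look like this in C# (the class and method names here are hypothetical, not the article's actual Guard class):

    using System;

    // Hypothetical guard helper: validate arguments at the top of a method and
    // throw ArgumentNullException / ArgumentException immediately, rather than
    // letting a NullReferenceException surface later.
    public static class ArgumentGuard
    {
        public static void NotNull(object value, string parameterName)
        {
            if (value == null)
                throw new ArgumentNullException(parameterName);
        }

        public static void NotNullOrEmpty(string value, string parameterName)
        {
            if (value == null)
                throw new ArgumentNullException(parameterName);
            if (value.Length == 0)
                throw new ArgumentException("Value cannot be empty.", parameterName);
        }
    }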

  • Super-Anonymoused (unregistered)

    You see this kind of shit a lot from the "TEAM DEFENSIVE PROGRAMMING" people, who are utterly convinced that a NullReferenceException is the spawn of the devil and should NEVER EVAR happen. They should get badges or something.

    We have something very similar in one of our codebases (I honestly fired up the project in question to check someone wasn't posting company code on the internet ;) ), without the following:

    • Code that looks like it's been half-StyleCop'd. I mean, if you're going to put in the useless documentation at least add the {} around if statements.
    • Idiotic string method that throws for an empty string. Yeah, you never want one of those. EVER.

    Unfortunately, we do have some of that (booleanCondition == false) shit going on. Because it's "clearer" apparently. At least it's not (booleanCondition != true).

    Safe to say, I'm happy I don't have to play on that project.

  • Alex (unregistered)

    No WTF to be found here. NullReferenceExceptions might not be thrown until the middle of a method, or not at all, if for example the value is simply stored somewhere. Calling the static methods of the Guard class at the top of the method for each argument will avoid this scenario.

    If you look at Microsoft's implementation of the .NET Framework, they check the passed arguments all the time, instead of just running the method until a null reference is dereferenced and an exception is thrown.
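
    For example (reusing the hypothetical ArgumentGuard sketched above, not the article's real Guard class): here the argument is only stored, so without the guard a null would not blow up until Process() is eventually called, far away from the buggy caller.

    public interface ILogger { void Log(string message); }   // hypothetical dependency

    public class OrderProcessor
    {
        private readonly ILogger _logger;

        public OrderProcessor(ILogger logger)
        {
            // Fails immediately and names the bad argument...
            ArgumentGuard.NotNull(logger, "logger");
            _logger = logger;
        }

        public void Process()
        {
            // ...whereas without the guard, a null logger would only fail here,
            // with a NullReferenceException, long after construction.
            _logger.Log("processing");
        }
    }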

  • (cs)

    I've come to the unfortunate conclusion that ANYONE who fancies themselves a "rockstar" developer (or "wizard", "sorcerer", "necromancer" and the like) is typically just a clueless hack who's managed to stick it out in a company long enough to where he has tenure to do as he will, usually by hoodwinking technically-inept senior management who don't know or care what "them fancy computer things" do under the hood.

  • coffee (unregistered)

    I'd say they make some fine ERP.

  • Polly Morf (unregistered)

    So he overloaded the NullReferenceException with his own ArgumentException... isn't that what the newfangled inheritance / polymorphism / etc. crap is all about?

  • (cs) in reply to Sam
    Sam:
    I'm a java programmer so...

    Checking method parameters for null's and throwing IllegalArgumentException is totally appropriate, and considered more correct than just letting a NullPointerException happen.

    Having a class to assist this doesn't seem like a bad thing to me. Maybe i need to see how Guard (bad name) is used in other classes to understand.

    The real WTF sounds like the reimplemented Collection classes.

    I'm not a Java programmer, so I don't know how it works there. To me it just seems like a slight semantic difference between a NullReferenceException occurring naturally when the null parameter is used vs checking and preemptively throwing an ArgumentNullException for the same purpose. In either case, you end up with an exception, and both are correct when they occur at the appropriate time.

    It would make more sense to simply handle the NullReferenceException or do some other checking elsewhere to prevent it from occurring, etc. Not just cut one exception off with another.

  • (cs) in reply to Alex

    100% in agreement with differentiating a null that is passed in (an argument exception that is the responsibility of the caller) from a null being de-referenced (the responsibility of the code itself).

  • Sheng Jiang (unregistered)

    This is not defensive programming, an exception is thrown regardless.

    ArgumentNullException is a developer exception, meaning a code contract is violated. In theory, the exception should never occur once the application is complete, just like Debug.Assert. But obviously, if you are writing a class library product, you cannot let the compiler optimize away the contract enforcement when you release the library. Many people also write this kind of code whenever possible, because another developer may be working on the code base after a few years and may not be familiar with the assumptions about parameter values made by the original author.

    The detailed stack trace of a NullReferenceException is almost useless compared to an ArgumentNullException, which has the offending code right at the top of the stack.

  • DonaldK (unregistered) in reply to O'Malley

    Get off your high horse. It's very readable, and can even be the result of incrementally modified code (== false might have been == (other tests) at some point in the past).

    Also since when is a stupid ! symbol a better way of denoting negation? That's not legible at all.

  • J (unregistered)

    The submitter writing 'sense' instead of 'since' bothers me more than this code.

    As others have pointed out, letting the NPE escape would be leaking implementation details -- ArgumentException will tell the caller in no uncertain terms that the exception happened because they provided a bad argument. It's a contract thing.

    The attribute stuff... overkill probably. Maybe the author had plans to implement some sort of static analysis using them? I don't know. In the worst case they serve as extra documentation -- and too much documentation is much better than not enough.

    Not a WTF in any way.

  • Macho (unregistered)

    I don't see any since in there

  • Dolphin (unregistered)

    It seems like the poster did not study http://en.wikipedia.org/wiki/Assertion_(computing) at university. I didn't either, but I have picked it up sense :)

  • Marcel Popescu (unregistered)

    Yeah... that sound you've heard? It's today's WTF falling flat. "Fail fast" is a principle that more programmers should learn and use.

  • (cs)

    Null Argument Exceptions
    Are coming to get you now
    Null Argument Exceptions
    Don't bother to ask us how
    Null Argument Exceptions
    Just suck 'em up and howl
    Null Argument Exceptions are
    [scream:] The Way

    (guitar break)

    And they're not a WTF
    And they're not a WTF
    And they're not a WTF
    Nohow no [scream:] Way ....

  • CustardCat (unregistered) in reply to Dolphin

    Glad some people mentioned Contracts and Assertions.

    I don't know much about C# but in Java if you use Assertions for design by contract programming (and presumably have good unit test coverage) you can check the contracts during the tests and the clever bit is that at runtime the assertions are turned off. The code I've seen that does this uses inner classes which I believe get optimised out of the final class files.

    Not saying it's a good or bad bit of code though ;-)
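
    The rough C# analogue, for what it's worth (just a sketch, not what the article does): Debug.Assert is marked [Conditional("DEBUG")], so the calls are stripped out entirely when the DEBUG symbol isn't defined, i.e. in a typical release build.

    using System.Diagnostics;

    public class Account
    {
        private decimal _balance;

        public void Withdraw(decimal amount)
        {
            // Compiled away when DEBUG isn't defined, so the contract check
            // costs nothing in a release build.
            Debug.Assert(amount > 0, "Withdrawal amount must be positive");

            _balance -= amount;
        }
    }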

  • Meep (unregistered) in reply to Dolphin
    Dolphin:
    It seems like the poster did not study http://en.wikipedia.org/wiki/Assertion_(computing) at university. I didn't either, but I have picked it up sense :)

    Now you're making me feel uneducated as I made it through my entire degree without ever studying wikipedia.

  • Matt Westwood (cs) in reply to Meep
    Meep:
    Dolphin:
    It seems like the poster did not study http://en.wikipedia.org/wiki/Assertion_(computing) at university. I didn't either, but I have picked it up sense :)

    Now you're making me feel uneducated as I made it through my entire degree without ever studying wikipedia.

    Didn't use Wikipedia? Blasphemy! Everybody who thinks they've got a degree who didn't use Wikipedia to get it ought to have their qualification stripped from them. Euler, Gauss, Hilbert, Cantor - I mean, how smart were they really? Can't have been that smart, they never used Wikipedia.

  • AP2 (unregistered) in reply to Super-Anonymoused
    Super-Anonymoused:
    You see this kind of shit a lot from the "TEAM DEFENSIVE PROGRAMMING" people, who are utterly convinced that a NullReferenceException is the spawn of the devil and should NEVER EVAR happen.

    They're right. It's not by chance that the inventor of the null reference called it the Billion Dollar Mistake (http://qconlondon.com/london-2009/presentation/Null+References:+The+Billion+Dollar+Mistake)

    A decent language does not have null references; if the method might return 'nothing', this should be explicit in the type. Since C# carried on this mistake, it's the programmer's responsibility to verify this pre-condition and fail if it isn't met, not let it fail "somewhere" in the code.

  • (cs)

    Agreed. The ArgumentException bit isn't a WTF (see System.ArgumentNullException). Reimplementing generic containers, on the other hand...

  • m (unregistered) in reply to O'Malley
    O'Malley:
    if (Enum.IsDefined(enumType, value) == false)

    I'm sorry, if you write code like this you're an idiot. No exceptions.

    well,
    if (P == false)
    is actually quite harmless;
    if (P == true)
    is a real problem, for it must actually be
    if((((…P… == true) == true) == true) == true)
    for style consistency.

  • pedantic (unregistered) in reply to bits
    bits:
    I minus well not comment on the grammer - it's a mute point.
    moot
  • c (unregistered) in reply to Sam
    Sam:
    I'm a java programmer so...

    Checking method parameters for null's and throwing IllegalArgumentException is totally appropriate, and considered more correct than just letting a NullPointerException happen.

    Having a class to assist this doesn't seem like a bad thing to me. Maybe i need to see how Guard (bad name) is used in other classes to understand.

    The real WTF sounds like the reimplemented Collection classes.

    I come from a C background, so I'm too tough for exceptions, but I don't understand how avoiding one exception to throw another can be appropriate.

    Avoiding NullPointerExceptions may well be a noble thing, but surely better to handle the problem than simply raise a different exception. Then again, maybe Java people think differently....

  • Jefffff (unregistered)

    In a language that used the "IF condition THEN statements ELSE statements" construct, we had a curmudgeon who habitually wrote "IF condition ELSE..."

    That's right. No THEN clause.

    When pressed to explain he would merely mutter "for readability" and move on.

    He would even push it so far as "IF NOT condition ELSE ..." to avoid the dreaded THEN. We never got a glimmer of understanding why he found this more "readable".

  • Jimmy Wales (unregistered) in reply to Matt Westwood
    Matt Westwood:
    Meep:
    Dolphin:
    It seems like the poster did not study http://en.wikipedia.org/wiki/Assertion_(computing) at university. I didn't either, but I have picked it up sense :)

    Now you're making me feel uneducated as I made it through my entire degree without ever studying wikipedia.

    Didn't use Wikipedia? Blasphemy! Everybody who thinks they've got a degree who didn't use Wikipedia to get it ought to have their qualification stripped from them. Euler, Gauss, Hilbert, Cantor - I mean, how smart were they really? Can't have been that smart, they never used Wikipedia.

    Use it? Mate, I invented the damn thing!!!

  • Stir the pot...somewhat (unregistered) in reply to O'Malley
    O'Malley:
    Frist?
    if (Enum.IsDefined(enumType, value) == false)

    I'm sorry, if you write code like this you're an idiot. No exceptions.

    why does the fact there are no exceptions in that code make it bad?

  • The point (unregistered) in reply to pedantic

    Eye think ewe mist the point.

  • Sam (unregistered) in reply to c

    The reason you would prefer it is because the developer experiencing the exception may not have access to the original source.

    NullPointerException tells the developer that the library might be broken.

    IllegalArgumentException tells them a required parameter is null.

    If the function is simple and the source is available, the null exception might not make much difference. But in a complex environment where the source is unavailable or hard to get to, the illegal argument exception gets you the information more quickly and more easily.

    see: http://www.informit.com/articles/article.aspx?p=31551

  • tom103 (unregistered) in reply to c

    A NullReferenceException will only occur when the null reference is used, and it can be far deeper in the call stack. It's much better to preemptively check if the reference is null and throw as early as possible, because it makes the issue much easier to locate.

  • (cs) in reply to Sam
    Sam:
    I'm a java programmer so...

    Checking method parameters for null's and throwing IllegalArgumentException is totally appropriate, and considered more correct than just letting a NullPointerException happen.

    Having a class to assist this doesn't seem like a bad thing to me. Maybe i need to see how Guard (bad name) is used in other classes to understand.

    I think it's of a higher priority that you start to understand exceptions. What you're doing is catching an unchecked exception (NullPointerException) and, in its stead, throwing a different unchecked exception (IllegalArgumentException). Not only is that essentially useless, because it doesn't make applications any more stable, but it's also highly confusing, because you're replacing a perfectly good NullPointerException (after all, the parameter is null) with an exception that is designed for other purposes. If anything, throw a NullPointerException directly.

    But that is tainted too. An unchecked exception means that you have a bug in your code. Checking each parameter at the beginning of a method means that you trade performance at runtime for a modicum of expedience during development. That sounds plain daft to me.

  • tom103 (unregistered) in reply to tom103

    (previous comment was an answer to c's comment, I just hit "Reply" instead of "Quote"...)

  • An alternate explanation (unregistered) in reply to AP2

    [The problems with null references].

    The problem with the null references problem is that you can't just remove null references.

    For example, how would you implement a tree structure if you can't set children to null? (You need to set the children to null at the bottom of the tree.)

    I think a good solution would be to have 2 reference types. One which can be null, and one which can't. Then the type of the reference will indicate if it can be null or not. (And this should be verified at compile time).

  • O'Malley (unregistered)
    DonaldK:
    Get off your high horse. It's very readable, and can even be the result of incrementally modified code (== false might have been == (other tests) at some point in the past).

    Also since when is a stupid ! symbol a better way of denoting negation? That's not legible at all.

    if (x == false)
    is, I suppose, marginally better than
    if (x == true)
    but I can't say I've ever had any difficulty understanding or noticing the "stupid" ! operator. Something like "if (x == true)" screams out to me that the person doesn't understand what a boolean is.

    This does not apply, of course, to, say, JavaScript's === operator (the presence of falsy values can make it necessary to actually check a value against true or false). It's been a while since I've messed with C# but I'm pretty sure it doesn't have the same issues.

    DonaldK:
    == false might have been == (other tests) at some point in the past

    Is that supposed to be a good thing that it was left that way? If I want to see what it was in the past I think I'd rather use version control.

  • Herby (unregistered) in reply to Matt Westwood
    Matt Westwood:
    Didn't use Wikipedia? Blasphemy! Everybody who thinks they've got a degree who didn't use Wikipedia to get it ought to have their qualification stripped from them. Euler, Gauss, Hilbert, Cantor - I mean, how smart were they really? Can't have been that smart, they never used Wikipedia.

    Yes, there ARE some people who DIDN'T use wikipedia, they were the ones who used this wonderful technology called BOOKS. I believe that they were better for it. They can be used with this other wonderful technology: candles.

  • comrad_question (unregistered) in reply to Sam
    Sam:
    The Real WTF is: I'm a java programmer

    FTFY, amirite?!?

  • Doug (unregistered) in reply to An alternate explanation

    Yes, you are correct. When people say null references should not be allowed, what they really mean is that there should be a strong distinction in the type system between nullable and non-nullable references. In .NET 2.0, they introduced "nullable value types", meaning that you could declare a variable of type bool? which could have three values: true, false, or null. This is really useful in certain circumstances, and is always re-invented (this is similar to the problem that prompted the invention of true/false/FileNotFound, a WTF from a few years ago).

    But just as it is very important to distinguish nullable value types from non-nullable value types, it is also useful to distinguish nullable reference types from non-nullable reference types. If you ask me, you should not be able to assign "object foo = null". That should be expressed instead as "object? foo = null", meaning that foo is nullable. The default "object" type should be non-nullable, and you can only assign null to objects of type "object?". You can always assign from "object" to "object?", but you have to cast if you assign from "object?" to "object", and the cast will throw an exception if the "object?" is null. Then instead of having checks for "if (arg1 == null) throw new ArgumentNullException("arg1");" all over the place, you just define arg1 to be of type "object" instead of "object?".
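
    C# doesn't give you that for reference types, so the closest approximation is a wrapper that enforces the invariant at the boundary. A rough sketch (NotNull<T> is made up, not a framework type):

    using System;

    // Hypothetical stand-in for a non-nullable reference: construction throws on
    // null, so anything holding a NotNull<T> is known to carry a real value.
    public struct NotNull<T> where T : class
    {
        private readonly T _value;

        public NotNull(T value)
        {
            if (value == null)
                throw new ArgumentNullException("value");
            _value = value;
        }

        public T Value { get { return _value; } }

        public static implicit operator NotNull<T>(T value) { return new NotNull<T>(value); }
        public static implicit operator T(NotNull<T> wrapper) { return wrapper.Value; }
    }

    Of course default(NotNull<T>) still smuggles a null in through the back door, which is exactly why this wants compiler support rather than a library hack.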

  • yeti (unregistered) in reply to Herby
    Herby:
    Matt Westwood:
    Didn't use Wikipedia? Blasphemy! Everybody who thinks they've got a degree who didn't use Wikipedia to get it ought to have their qualification stripped from them. Euler, Gauss, Hilbert, Cantor - I mean, how smart were they really? Can't have been that smart, they never used Wikipedia.

    Yes, there ARE some people who DIDN'T use wikipedia, they were the ones who used this wonderful technology called BOOKS. I believe that they were better for it. They can be used with this other wonderful technology: candles.

    Is this gonna be an encyclopedia vs wikipedia war like the great war of 2007?

  • CB (unregistered) in reply to c

    An ArgumentException includes the name of the faulty parameter/variable. A NullReferenceException doesn't. If the .pdb files haven't been copied to the production computer then the stack trace won't give the line number where an exception occurred, just the method in which it occurred. With a NullReferenceException you would be left wondering which of many possible references in the method might have been the problem. With an ArgumentException you would know where to start looking.
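
    For example (method and types made up, just to show the message difference):

    using System;

    public class Order
    {
        public void MarkShipped() { /* ... */ }
    }

    public class ShippingService
    {
        public void Ship(Order order)
        {
            if (order == null)
                // The message reads roughly "Value cannot be null. Parameter name: order",
                // so a production log identifies the bad argument even without .pdb files.
                throw new ArgumentNullException("order");

            order.MarkShipped();
        }
    }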

  • Jimmy (unregistered) in reply to Doug
    Doug:
    Yes, you are correct. When people say null references should not be allowed, what they really mean is that there should be a strong distinction in the type system between nullable and non-nullable references. In .NET 2.0, they introduced "nullable value types", meaning that you could declare a variable of type bool? which could have three values: true, false, or null. This is really useful in certain circumstances, and is always re-invented (this is similar to the problem that prompted the invention of true/false/FileNotFound, a WTF from a few years ago).

    But just as it is very important to distinguish nullable value types from non-nullable value types, it is also useful to distinguish nullable reference types from non-nullable reference types. If you ask me, you should not be able to assign "object foo = null". That should be expressed instead as "object? foo = null", meaning that foo is nullable. The default "object" type should be non-nullable, and you can only assign null to objects of type "object?". You can always assign from "object" to "object?", but you have to cast if you assign from "object?" to "object", and the cast will throw an exception if the "object?" is null. Then instead of having checks for "if (arg1 == null) throw new ArgumentNullException("arg1");" all over the place, you just define arg1 to be of type "object" instead of "object?".

    but if you want non-nullable objects you are opening a new can of worms... If objects are not initialised on declaration you have no way of knowing whether they have been initialised or not, other than using a dummy object, which is really much the same as using null.

    Of course, an alternative would be to force initialisation of objects on declaration, but this would certainly cramp my coding style and I think it's not too hard to see how this quickly becomes inconvenient. Think about Data Structures that are normally implemented as nodes (linked lists, binary trees, tries...). These data structures are empty if they have no elements, but how do we represent this without a null? We either create a dummy node that represents no node, or we create a class containing the structure just so we can report if it is empty....the first feels like a kludge, and the second seems to defeat the purpose of such data structure. I don't really understand the need for (nor purpose of) non-nullable types. Although null can cause many headaches, most alternatives would equally cause many headaches. Like with most programming concepts, they can be a problem when abused, but for the most part are useful, perhaps necessary.

  • Dathan (unregistered) in reply to Doug
    Doug:
    Yes, you are correct. When people say null references should not be allowed, what they really mean is that there should be a strong distinction in the type system between nullable and non-nullable references.

    Umm... No. I would assert that when MOST people say "null references should not be allowed," what they really mean is that the literal value null doesn't exist at all in the type system - or at least isn't assignable to ANY reference type. Nullable type systems aren't necessary - there are a number of alternatives that offer some compelling improvements. The Option monad comes to mind, and so does the Null Object pattern. While null is a convenient concept, under most circumstances it's much better for your system to use a type-specific "invalid" value - but only when supported by the type system, e.g., so you don't end up with an integer field that has "-99" to indicate invalid values.
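
    For anyone who hasn't met it, a bare-bones C# rendition of the Option idea (heavily simplified; real implementations such as F#'s option type do much more):

    using System;

    // Minimal Option type: a value is either present (Some) or absent (None),
    // and Match forces the caller to handle both cases.
    public struct Option<T>
    {
        private readonly bool _hasValue;
        private readonly T _value;

        private Option(T value)
        {
            _hasValue = true;
            _value = value;
        }

        public static Option<T> Some(T value) { return new Option<T>(value); }
        public static Option<T> None { get { return default(Option<T>); } }

        public TResult Match<TResult>(Func<T, TResult> some, Func<TResult> none)
        {
            return _hasValue ? some(_value) : none();
        }
    }

    // e.g., somewhere in a method:
    // Option<string> name = Option<string>.Some("Rockstar");
    // int length = name.Match(some: s => s.Length, none: () => 0);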

  • Qandra (unregistered) in reply to CB
    CB:
    An ArgumentException includes the name of the faulty parameter/variable. A NullReferenceException doesn't. If the .pdb files haven't been copied to the production computer then the stack trace won't give the line number where an exception occurred, just the method in which it occurred. With a NullReferenceException you would be left wondering which of many possible references in the method might have been the problem. With an ArgumentException you would know where to start looking.
    This sounds like you are looking for issues in your own code. Maybe this is a case for assertions.
  • Dathan (unregistered) in reply to Jimmy
    Jimmy:
    but if you want non-nullable objects you are opening a new can of worms... If objects are not initialised on declaration you have no way of knowing whether they have been initialised or not, other than using a dummy object, which is really much the same as using null.

    Of course, an alternative would be to force initialisation of objects on declaration, but this would certainly cramp my coding style and I think it's not too hard to see how this quickly becomes inconvenient. Think about Data Structures that are normally implemented as nodes (linked lists, binary trees, tries...). These data structures are empty if they have no elements, but how do we represent this without a null? We either create a dummy node that represents no node, or we create a class containing the structure just so we can report if it is empty....the first feels like a kludge, and the second seems to defeat the purpose of such data structure. I don't really understand the need for (nor purpose of) non-nullable types. Although null can cause many headaches, most alternatives would equally cause many headaches. Like with most programming concepts, they can be a problem when abused, but for the most part are useful, perhaps necessary.

    Your description of the problem shows an implicit assumption - that you're working with a low-order type system. Higher-order type systems (e.g., those in most functional languages) can deal quite elegantly with non-nullable types without the initialization woes you describe. Often, the secret sauce is the Maybe or Option monad (in many ways effectively equivalent to Nullable<T> in C#; but more sophisticated).

  • smxlong (unregistered)

    You guys are lunatics. If an argument is null and it must never be null, what has gone wrong is an ARGUMENT error, so an ARGUMENT exception is the right thing to throw. Simply letting the thing detonate when the reference is used causes a number of problems.

    1a. You may have made a bunch of changes to program state which must now be unwound. What a freaking pain.
    1b. In order to unwind those changes, you must wrap the use of the reference in a try block, what a pain in the ass.
    2. The reference may not even be used -- it could simply be stored, only to detonate much later, far from the code which provided it, making for "interesting" debugging.

    Ahh, The Daily WTF. Where people espouse the philosophy of "just let it explode" instead of deliberately enforcing preconditions. A WTF? Certainly.

  • hmmmm (unregistered) in reply to Dathan
    Dathan:
    Doug:
    Yes, you are correct. When people say null references should not be allowed, what they really mean is that there should be a strong distinction in the type system between nullable and non-nullable references.

    Umm... No. I would assert that when MOST people say "null references should not be allowed," what they really mean is that the literal value null doesn't exist at all in the type system - or at least isn't assignable to ANY reference type. Nullable type systems aren't necessary - there are a number of alternatives that offer some compelling improvements. The Option monad comes to mind, and so does the Null Object pattern. While null is a convenient concept, under most circumstances it's much better for your system to use a type-specific "invalid" value - but only when supported by the type system, e.g., so you don't end up with an integer field that has "-99" to indicate invalid values.

    But wouldn't Null Objects just hide the problem, rather than fixing it? A seemingly valid object is passed that doesn't behave as expected because it isn't really a real object. I don't understand how this has any advantage over NULL. If we don't check for NULL, and try to execute a method on it, we have issues. Instead we have an object that (for all intents and purposes) looks like an object we can handle, whose methods we can call, but whose responses are useless and return some default values which might or might not be useful. Essentially we are over-engineering the idea of NULL to create a concept that merely masks some of the issues with NULL - issues which will probably manifest in other ways.

    Think about the real world. If I have 2 oranges and give them both away, I suddenly have nothing. I don't have a special representation of an orange (perhaps just the peel) that can remind me I have nothing, but which I can still try to manipulate like an orange. I have nothing. This is the same nothing I have when I give away my only 3 apples. That's right, despite these objects being very different, I still end up with the same nothing when I don't have any of them.

    NOTE: This argument is directly against a notion of a Null Object - I won't pretend to understand the Option monad, and perhaps it does somehow offer a reasonable alternative.
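
    For reference, the Null Object pattern being argued against looks roughly like this (names made up); the objection above is that NullLog quietly swallows the "nothing there" case instead of surfacing it:

    // A do-nothing implementation stands in for "no logger", so callers never
    // need a null check...
    public interface ILog { void Write(string message); }

    public class ConsoleLog : ILog
    {
        public void Write(string message) { System.Console.WriteLine(message); }
    }

    public class NullLog : ILog
    {
        // ...but the absence of a real logger is silently ignored, which is
        // exactly the complaint above.
        public void Write(string message) { }
    }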

  • DG (unregistered)

    Hardly a WTF. I mean, there are other ways of implementing this, but .NET development would be much better if people used at least something like this.

  • (cs) in reply to hmmmm
    hmmmm:
    But wouldn't Null Objects just hide the problem, rather than fixing it?

    That's why you use Option/Maybe/Nullable. The problem is not hidden, it's null that hides the problem. Removing null and replacing it with these objects makes it explicit that a value may not exist, forcing a developer to handle this case.

    Instead of :

    someObjectThatMayBeNull.DoSomething()

    You have:

        if (someObjectThatMayBeNull.HasValue)
            someObjectThatMayBeNull.Value.DoSomething()
        else
            ... explicit handling for this case ...
    
  • Jimmy (unregistered) in reply to Dathan
    Dathan:
    Jimmy:
    but if you want non-nullable objects you are opening a new can of worms... If objects are not initialised on declaration you have no way of knowing whether they have been initialised or not, other than using a dummy object, which is really much the same as using null.

    Of course, an alternative would be to force initialisation of objects on declaration, but this would certainly cramp my coding style and I think it's not too hard to see how this quickly becomes inconvenient. Think about Data Structures that are normally implemented as nodes (linked lists, binary trees, tries...). These data structures are empty if they have no elements, but how do we represent this without a null? We either create a dummy node that represents no node, or we create a class containing the structure just so we can report if it is empty....the first feels like a kludge, and the second seems to defeat the purpose of such data structure. I don't really understand the need for (nor purpose of) non-nullable types. Although null can cause many headaches, most alternatives would equally cause many headaches. Like with most programming concepts, they can be a problem when abused, but for the most part are useful, perhaps necessary.

    Your description of the problem shows an implicit assumption - that you're working with a low-order type system. Higher-order type systems (e.g., those in most functional languages) can deal quite elegantly with non-nullable types without the initialization woes you describe. Often, the secret sauce is the Maybe or Option monad (in many ways effectively equivalent to Nullable<T> in C#; but more sophisticated).

    Can you explain (I'm not being stubbornly argumentative or facetious, I really don't understand)?
    How is it an advantage to have an empty non-null? And how is it implemented so we know it is empty?
    Surely we still have to do some sort of test, which (to my mind) still leaves us in much the same boat.

    Happy to learn otherwise, but I'm not seeing the use in non-nullable types. To me it seems like people are just proposing to have a type-specific null concept (call it an empty object or whatever you will). So we save on null-pointer issues by not having a null, but we still have issues when we assume something has a value when it doesn't really.
