Testing Done Right

  • Power Troll 2011-03-22 10:24
    Oh my. frits?

    Anyway, while I agree that 100% code coverage is meaningless when test defects exist, is it simply a gestalt "feeling" about when your code is good to go, or what?
  • Ed 2011-03-22 10:27
    Someone needs to explain that last bit to my boss. Badly.
  • Anonymous 2011-03-22 10:34
    Damn, that's a nice fridge. Anyway, I agree with the sentiment and I always unit test functionality that cannot be adequately exercised by an end user. But everything else I leave to the chimps in the testing department. Why make senior developers do the work of a trained monkey? There's a reason we get paid twice as much as them, you know.
  • SpasticWeasel 2011-03-22 10:38
    So Plato spoke Latin, huh? It was Juvenal.
  • Studley 2011-03-22 10:44
    Just checking whether I also need to type in "<br />" to get a linebreak in my comment.
  • Oh God It Hurts 2011-03-22 11:03
    "At best (i.e., with unlimited resources), you can be 99.999…% confident that there will be no defects in production."

    So, 100% confident then.
  • dadasd 2011-03-22 11:03
    One real WTF is the number of developers (yes, 341777, I'm looking at you) who still think unit testing is a testing technique, rather than a programming one.
  • boog 2011-03-22 11:04
    Change Impact – the estimated scope of a given change; this varies from change to change and, like testing, is always a “good enough” estimate, as even a seemingly simple change could bring down the entire system

    I think about this every time somebody tells me to "just refactor".
  • sd 2011-03-22 11:04
    SpasticWeasel:
    So Plato spoke latin huh? It was Juvenal.


    No, you're juvenile!
  • Tim 2011-03-22 11:07
    Even if you have coverage of 100% of the lines of code that doesn't mean you have covered 100% of the code paths. In fact, since the number of code paths through the entire code base grows exponentially you have covered some vanishingly small percentage of the code paths. For example, it is pretty easy to get 100% coverage of the following lines without uncovering the null pointer exception

    if (x <= 0) {
        obj = NULL;        // covered by a test with x = -1
    }
    if (x >= 0) {
        obj.doSomething(); // covered by a test with x = 1
    }
    // Both lines are now "covered", yet x = 0 takes both branches and dereferences NULL.
  • Maurizio 2011-03-22 11:11
    I have a problem with this:
    > Integration Testing – does the program function?

    What does "the program function" means ? Doesn't crash ? That is easy. But what if just behave differently from expected ?
    Than, what is expected ? What is expected is define in the functional specifications, so what is the real difference between integration and functional testing ?

    My personal definition, that i am using in my work in a big IT departement, is that integration test verify that a codebase correspond to a technical design, i.e. that the different modules interacts as the architect/developer decided, while the functional tests verify that the design and the code actually correspond to the functional requirements.

    Opinions ?
  • Zelda 2011-03-22 11:12
    As a QA Monkey I should be offended, but then I realized that programmers use all their salary to buy dates while I use mine for bananas.
  • frits 2011-03-22 11:12
    Power Troll:
    Oh my. frits?

    Anyway, while I agree that 100% code coverage is meaningless when test defects exist, is it simply a gestalt "feeling" about when your code is good to go, or what?

    You rang?
  • boog 2011-03-22 11:13
    dadasd:
    One real WTF is the number of developers (yes, 341777, I'm looking at you) who still think unit testing is a testing technique, rather than a programming one.
    Why can't it be both?
  • The Ancient Programmer 2011-03-22 11:18
    Ed:
    Someone needs to explain that last bit to my boss. Badly.


    Why explain it to him badly?
  • boog 2011-03-22 11:22
    Anonymous:
    Anyway, I agree with the sentiment and I always unit test functionality that cannot be adequately exercised by an end user. But everything else I leave to the chimps in the testing department. Why make senior developers do the work of a trained monkey?

    I've found that as a developer, I'd much prefer a testing department comprised of smart humans with strong technical skills, rather than chimps. With technical testers, you get surprisingly-useful reports about defects and failing tests, making it much easier to identify the problem and correct it.

    With trained monkeys, you only get phone calls saying "program no work; fix it". The flying poo can become a bit of a problem too.
  • Hatshepsut 2011-03-22 11:22
    An interesting definition of Acceptance Testing ("formal or informal testing to ensure that functional requirements as implemented are valid and meet the business need").

    Where I come from, Acceptance Testing verifies that the delivered product meets the contractually agreed requirements. Verification that these requirements meet the business need is a separate matter.

    Also interesting is the designation of functional and non-functional requirements - a concept that seems to come from a strongly data-processing oriented background, rather than, say, a system-control oriented background, in which performance is often a critical functional requirement.
  • dohpaz42 2011-03-22 11:24
    Maurizio:
    I have a problem with this:
    > Integration Testing – does the program function?

    What does "the program function" means ? Doesn't crash ? That is easy. But what if just behave differently from expected ?
    Than, what is expected ? What is expected is define in the functional specifications, so what is the real difference between integration and functional testing ?

    My personal definition, that i am using in my work in a big IT departement, is that integration test verify that a codebase correspond to a technical design, i.e. that the different modules interacts as the architect/developer decided, while the functional tests verify that the design and the code actually correspond to the functional requirements.

    Opinions ?


    Testing is such a subjective area of programming that it means different things to different people/businesses/departments. The question, "Does the program function?" could mean "Does it crash?", or "Does it do XYZ, ZYX, or some other functionality?"

    Before you test, you first have to define your tests. But, to define your tests, you have to define what is a reasonable outcome for the test you want to write. Is there more than one way to access a piece of code? Does it produce more than one type of output - or any output, for that matter?

    Testing is almost like software design; you have to sit down and plan out what you want to test, and how you're going to implement it. Which, unfortunately, to most business types is a 100% (or 99.(9)%) waste of time and effort.
  • Jon 2011-03-22 11:24
    I'm throwing a WtfNotFunny exception.
  • neminem 2011-03-22 11:25
    Tim:
    In fact, since the number of code paths through the entire code base grows exponentially you have covered some vanishingly small percentage of the code paths.

    I'd argue anytime your input comes from the user or from another system you have no control over the output of, you've covered exactly 0%. A finite fraction of an infinite is, after all, precisely nothing. (We once found a major blocking bug in a process iff the text input to the process started with a lowercase 'p'. That was a fun one.)

    But I'm happy to work at a company that distinguishes between devs and testers; we certainly are responsible for testing the effects of our own code before checking in changes (that would be your #1), but the "test team" consists of people that were hired for that purpose. I sort of thought it was like that everywhere (well, everywhere where the whole company isn't a couple devs.)
  • frits 2011-03-22 11:26
    The 100% code coverage metric is so dumb. Have these people never heard of permutation? I assume that 100% of the code is tested at least informally. I mean do some developers really write code that they've never run before releasing it?
  • Hatshepsut 2011-03-22 11:28
    Anonymous:
    Damn, that's a nice fridge.


    Shame about the furniture.

    And the clouds on the ceiling.

  • JadedTester 2011-03-22 11:30
    I mean do some developers really write code that they've never run before releasing it?


    Yes. Unfortunately, the amount of testing a developer does tends to have a negative correlation with the amount of testing their code actually needs. There's usually no clearer sign of a developer who belongs on the dole queue than one who thinks their code will work first time.
  • Anonymous 2011-03-22 11:32
    boog:
    Anonymous:
    Anyway, I agree with the sentiment and I always unit test functionality that cannot be adequately exercised by an end user. But everything else I leave to the chimps in the testing department. Why make senior developers do the work of a trained monkey?

    I've found that as a developer, I'd much prefer a testing department comprised of smart humans with strong technical skills, rather than chimps. With technical testers, you get surprisingly-useful reports about defects and failing tests, making it much easier to identify the problem and correct it.

    With trained monkeys, you only get phone calls saying "program no work; fix it". The flying poo can become a bit of a problem too.

    I agree, I'd love to work with technically-minded testers, but I am begrudgingly speaking from experience. We only have one capable tester, the rest are rejects who only test because they don't have the skill to do anything else in IT. They sit at their desks grinding the organ according to the pre-defined scripts authored by the one guy that actually knows what he's doing. The monkey analogy is all too accurate, to be honest.
  • Exception? 2011-03-22 11:34
    Jon:
    I'm throwing a WtfNotFunny exception.

    Implying that this is somehow an exception to the rule.

    Bazzzzing!
  • JadedTester 2011-03-22 11:39
    Anonymous:
    boog:
    Anonymous:
    Anyway, I agree with the sentiment and I always unit test functionality that cannot be adequately exercised by an end user. But everything else I leave to the chimps in the testing department. Why make senior developers do the work of a trained monkey?

    I've found that as a developer, I'd much prefer a testing department comprised of smart humans with strong technical skills, rather than chimps. With technical testers, you get surprisingly-useful reports about defects and failing tests, making it much easier to identify the problem and correct it.

    With trained monkeys, you only get phone calls saying "program no work; fix it". The flying poo can become a bit of a problem too.

    I agree, I'd love to work with technically-minded testers, but I am begrudgingly speaking from experience. We only have one capable tester, the rest are rejects who only test because they don't have the skill to do anything else in IT. They sit at their desks grinding the organ according to the pre-defined scripts authored by the one guy that actually knows what he's doing. The monkey analogy is all too accurate, to be honest.


    Pay peanuts, etc. Sack your team, and hire people who actually want to test, have technical skills and take it seriously as both a career and a technical field. Works for Microsoft.
  • boog 2011-03-22 11:45
    Anonymous:
    I agree, I'd love to work with technically-minded testers, but I am begrudgingly speaking from experience. We only have one capable tester, the rest are rejects who only test because they don't have the skill to do anything else in IT.
    This is what I've experienced too; it's easy to tell which testers actually know what they're doing. Unfortunately, management wrongly sees testing as a mindless (and profitless) task, so they usually try to hire testers from the local zoo.

    Anonymous:
    The monkey analogy is all too accurate, to be honest.
    My advice: wear a raincoat.
  • dohpaz42 2011-03-22 11:47
    frits:
    The 100% code coverage metric is so dumb. Have these people never heard of permutation? I assume that 100% of the code is tested at least informally. I mean do some developers really write code that they've never run before releasing it?


    I don't think that it's dumb. I think that it is misunderstood. 100% code coverage, to me, means that every class/function/method has a corresponding test. The real questions are: Are the tests sufficient? Are the tests meaningful?

    Having 100% code coverage is a worthy goal to attempt to achieve, so long as you don't just try to write tests for the sake of writing tests. Can you possibly cover every single permutation of error that could possibly occur? No, and you shouldn't necessarily try (it goes back to what Alex said about the cost/effort going up as you get closer to 100%).

    As an example, if I wrote a class that had three methods in it, then to me it is reasonable that to achieve 100% code coverage I would need three unit tests. If I add a new method, and I add a new unit test, then I'm still at 100% code coverage.

    Do these tests test for every single possible outcome? Probably not. Should I care? Yes. Will I write a test for every single possible outcome? Hell no.

    To me there are two types of development: test-driven development, where I write tests [theoretically] before I write code - this is a practice that helps to shape the program, while simultaneously giving me code coverage AND documentation; and, exception-driven development, where tests are written to reproduce bugs as they are found. This helps to document known issues, reproduce said issues, and provide future test vectors to ensure that future changes don't re-introduce already fixed bugs.
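    For instance, a minimal sketch of that second style (hypothetical names, JUnit 4 just for illustration): the bug gets captured as a test before the fix goes in, and the test sticks around as a tripwire against regressions.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical regression test: a bug report said customers literally named
    // "Null" were being greeted as guests; the fix shipped together with this test.
    public class GreetingRegressionTest {

        // The code under test (normally in its own file), shown post-fix.
        static String greet(String name) {
            if (name == null || name.trim().isEmpty()) {
                return "Hello, guest.";
            }
            return "Hello, " + name + ".";
        }

        @Test
        public void customerLiterallyNamedNullIsGreetedByName() {
            assertEquals("Hello, Null.", greet("Null"));
        }
    }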
  • trwtf 2011-03-22 11:52
    A few years ago my old company off-shored all their testing to India. Not the development, just testing - so the off-shore team were writing and running scripts for a black-box that they didn't even begin to understand. Three bug-ridden releases later and... well, let's just say that company isn't around anymore.
  • Coyne 2011-03-22 11:54
    I agree that testing is designed to reach some level of quality that will never be 100%.

    Where I often see a lack in software is what I would call recoverability: The ability to correct a problem after it has occurred.

    I'm generally a cautious individual. Experience has demonstrated that I should always have a fallback plan, which is really a chain of defenses thing.

    If the application has no recoverability, then there is no fallback: The only thing between "everything is good" and "absolute disaster" is a fence called "everything works perfectly". When designing applications, one of the things that should always be done is to consider, "What if this doesn't work? What would be our fallback plan?"

    Because, if you don't think about that and plan for that, then one day something doesn't work perfectly and you find yourself in absolute disaster land because you have no other line of defense. That is actually the source of some really good stories (in here as well as in other places). I'll relate one:

    (In a previous life.) We bought a 3rd-party product for donation management. The builders of that product had a really interesting way of handling errors: they ignored all errors.

    One of the processes was the daily account apply. You entered incoming donations in a batch; the apply process would then read the batch, update the accounts and delete the batch.

    On the disaster day in question, the accounts table reached its size limit partway (early on) through the processing of the batch and, since the developers ignored such mundane messages from the database as "the table is full and I can't insert this now", the process blithely continued on.

    Then it deleted the batch.

    No fallback. No way to recover the batch and so an entire day of entry by the user was lost.


    Okay, so now let's create a fallback. That's hard, right? No, in this case actually it isn't: The solution is to back up the entire database before running the apply process. Every single time a batch is to be applied! That way, if something goes wrong, you fix the problem, restore, rerun and everything is cool.

    ...and usually, fallback is just like that. It mostly consists of one single element that I often see omitted: Keep the input state so that rerun is possible. There are "bazillions" of ways to do that; take your own pick.

    But some people like to live on the edge and depend on the application doing everything right, and when it doesn't, well, glad I'm not them.
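    A minimal sketch of that "keep the input state" idea (hypothetical file names, Java only because that's what's been used elsewhere in this thread): copy the batch aside before the apply touches anything, and only delete the input once the apply has actually succeeded.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardCopyOption;

    public class BatchApply {

        // Preserve the input before doing anything destructive, so a failed
        // apply can be fixed, restored, and rerun instead of being lost forever.
        static Path backupBatch(Path batchFile) throws IOException {
            Path backup = Paths.get(batchFile + "." + System.currentTimeMillis() + ".bak");
            return Files.copy(batchFile, backup, StandardCopyOption.COPY_ATTRIBUTES);
        }

        public static void main(String[] args) throws IOException {
            Path batch = Paths.get("donations-batch.csv");  // hypothetical input file
            Path backup = backupBatch(batch);
            try {
                applyToAccounts(batch);  // the risky part: every database error must be checked
                Files.delete(batch);     // only delete the input once the apply succeeded
            } catch (Exception e) {
                System.err.println("Apply failed; original batch preserved at " + backup);
                throw e;
            }
        }

        static void applyToAccounts(Path batch) {
            // ... insert the donations, checking every error instead of ignoring it ...
        }
    }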
  • Anonymous 2011-03-22 11:58
    boog:
    Unfortunately, management wrongly sees testing as a mindless (and profitless) task, so they usually try to hire testers from the local zoo.

    Absolutely true, this is pretty much the crux of the problem I think.

    boog:
    My advice: wear a raincoat.

    My screen is now intimately familiar with my last mouthful of coffee, thanks!
  • Machtyn 2011-03-22 12:09
    The Ancient Programmer:
    Ed:
    Someone needs to explain that last bit to my boss. Badly.


    Why explain it to him badly?

    If you explain it badly, perhaps he will have a gooder understanding.
  • Walter Kovacs 2011-03-22 12:11
    Plato:
    quis custodiet ipsos custodes?

    ... and all the classes and procedures will look up and shout "Test us!"
    ... and I'll look down and whisper "No."
  • QJ 2011-03-22 12:14
    boog:
    Change Impact – the estimated scope of a given change; this varies from change to change and, like testing, is always a “good enough” estimate, as even a seemingly simple change could bring down the entire system

    I think about this every time somebody tells me to "just refactor".


    If a seemingly simple change can bring down the entire system, there's something fundamentally wrong with that system and it's ripe for a bit of guerrilla refactoring.
  • QJ 2011-03-22 12:17
    The Ancient Programmer:
    Ed:
    Someone needs to explain that last bit to my boss. Badly.


    Why explain it to him badly?


    Waste of time explaining it to him well. Pearls before swine.
  • Machtyn 2011-03-22 12:17
    I would like to sub scribe to this the ory about never leaving the house to avoid getting hit by a bus.

  • Brass Monkey 2011-03-22 12:18
    QJ:
    boog:
    Change Impact – the estimated scope of a given change; this varies from change to change and, like testing, is always a “good enough” estimate, as even a seemingly simple change could bring down the entire system

    I think about this every time somebody tells me to "just refactor".


    If a seemingly simple change can bring down the entire system, there's something fundamentally wrong with that system and it's ripe for a bit of gorilla refactoring.

    FTFY
  • QJ 2011-03-22 12:19
    frits:
    The 100% code coverage metric is so dumb. Have these people never heard of permutation? I assume that 100% of the code is tested at least informally. I mean do some developers really write code that they've never run before releasing it?


    In some of the places I've worked I've had to opine that it would be nice if the developers checked whether their code actually compiles in the first place.
  • C-Octothorpe 2011-03-22 12:20
    boog:
    Anonymous:
    I agree, I'd love to work with technically-minded testers, but I am begrudgingly speaking from experience. We only have one capable tester, the rest are rejects who only test because they don't have the skill to do anything else in IT.
    This is what I've experienced too; it's easy to tell which testers actually know what they're doing. Unfortunately, management wrongly sees testing as a mindless (and profitless) task, so they usually try to hire testers from the local zoo.

    Anonymous:
    The monkey analogy is all too accurate, to be honest.
    My advice: wear a raincoat.


    ... and goggles. Some of them have sniper aim...

    SIDENOTE: are there really people who WANT to test? I mean, speaking as a developer looking at the role of QA, it just doesn't seem all that appealing to me. I am not the smartest guy by far, but from my personal experience, QA tends to attract people with the intelligence levels reserved for the mildly retarded (I'd say in the neighbourhood of 100, give or take 10 pts). This usually results in three kinds of QA monkeys:

    1) The new grad/career switch above-average smart person trying to get into software development. Some people take this path, which is fine, and in some corps a necessary step...

    2) The shitty, change-resistant ex-developer who couldn't hack it anymore and (in)voluntarily goes into QA. This is more rare as pride usually takes over and they just quit and f*ck up another dev shop with their presence.

    3) The mildly retarded sociopaths who don't have the intelligence or social skills necessary to make it in any other role IT related. They may be able to make something of themselves in another profession, but because they got mad skillz at Halo, they MUST be good at everything IT related, which keeps them in the QA gig.

    I realize this sounds very elitist, which isn't my intention but this is just what I have noticed/pondered...
  • QJ 2011-03-22 12:27
    Brass Monkey:
    QJ:
    boog:
    Change Impact – the estimated scope of a given change; this varies from change to change and, like testing, is always a “good enough” estimate, as even a seemingly simple change could bring down the entire system

    I think about this every time somebody tells me to "just refactor".


    If a seemingly simple change can bring down the entire system, there's something fundamentally wrong with that system and it's ripe for a bit of gorilla refactoring.

    FTFY

    +1! Sprayed coffee.
  • Jay 2011-03-22 12:28
    neminem:
    Tim:
    In fact, since the number of code paths through the entire code base grows exponentially you have covered some vanishingly small percentage of the code paths.

    I'd argue anytime your input comes from the user or from another system you have no control over the output of, you've covered exactly 0%. A finite fraction of an infinite is, after all, precisely nothing. (We once found a major blocking bug in a process iff the text input to the process started with a lowercase 'p'. That was a fun one.)

    But I'm happy to work at a computer that distinguishes between devs and testers; we certainly are responsible for testing the effects of our own code before checking in changes (that would be your #1), but the "test team" consists of people that were hired for that purpose. I sort of thought it was like that everywhere (well, everywhere where the whole company isn't a couple devs.)


    There's a difference between "all possible code paths" and "all possible inputs".

    Pedantic digression: The number of possible inputs is not infinite. There are always limits on the maximum size of an integer or length of a string, etc. So while the number of possible inputs to most programs is huge, it is not infinite.

    Suppose I write (I'll use Java just because that's what I write the most, all due apologies to the VB and PHP folks):


    String addperiod(String s)
    {
        if (s.length() == 0)
            return s;
        else if (s.endsWith("."))
            return s;
        else
            return s + ".";
    }


    There are an extremely large number of possible inputs. But there are only three obvious code paths: empty string, string ending with period, and "other". There are at least two other not-quite-obvious code paths: s==null and s.length==maximum length for a string. Maybe there are some other special cases that would cause trouble. But for a test of this function to be thorough, we would not need to try "a", "b", "c", ... "z", "aa", "ab", ... etc for billions and billions of possible values.

    That said, where we regularly get burned on testing is when we don't consider some case that turns out to create a not-so-obvious code path. Like, our code screws up when the customer is named "Null" or crazy things like that.
  • Anonymous Hacker 2011-03-22 12:29
    SIDENOTE: are there really people who WANT to test?
    Testing, the way it's usually defined? Absolutely not. But with the right job description, testing can approach pure hacking: make this program break, by any means necessary. I've certainly had fun taking an hour or two away from my own code to find the security, performance, etc. holes in my colleagues', and I can see that being an enjoyable full-time job.
  • JadedTester 2011-03-22 12:32
    C-Octothorpe:
    boog:
    Anonymous:
    I agree, I'd love to work with technically-minded testers, but I am begrudgingly speaking from experience. We only have one capable tester, the rest are rejects who only test because they don't have the skill to do anything else in IT.
    This is what I've experienced too; it's easy to tell which testers actually know what they're doing. Unfortunately, management wrongly sees testing as a mindless (and profitless) task, so they usually try to hire testers from the local zoo.

    Anonymous:
    The monkey analogy is all too accurate, to be honest.
    My advice: wear a raincoat.


    ... and goggles. Some of them have sniper aim...

    SIDENOTE: are there really people who WANT to test? I mean, speaking as a developer looking at the role of QA, it just doesn't seem all that appealing to me. I am not the smartest guy by far, but from my personal experience, QA tends to attract people with the intelligence levels reserved for the mildly retarded (I'd say in the neighbourhood of 100, give or take 10 pts). This usually results in people three kinds of QA monkeys:

    1) The new grad/career switch above-average smart person trying to get into software development. Some people take this path, which is fine, and in some corps a necessary step...

    2) The shitty, change resistant x-developer who couldn't hash it anymore and (in)voluntarily goes into QA. This is more rare as pride usually takes over and they just quit and f*ck up another dev shop with their presence.

    3) The mildly retarded sociopaths who don't have the intelligence or social skills necessary to make it in any other role IT related. They may be able to make something of themselves in another profession, but because they got mad skillz at Halo, they MUST be good at everything IT related, which keeps them in the QA gig.

    I realize this sounds very elitist, which isn't my intention but this is just what I have noticed/pondered...


    If testing is limited to "Click x. See if actual result matches expected result" then no, I doubt many people do. On the other hand, if it's "Build an automated framework for this system, then go to the architecture committee meeting to talk about testability. After that, meet with the product manager and BA to distill their requirements into Cucumber and then spend the afternoon on exploratory testing trying to break the new system" then why not?
    Done properly, QA is as much of an intellectual challenge as development, and just as rewarding. The problem is that most testers are crap, and most test managers are worse, so they end up with the former definition of testing.
  • Anonymous 2011-03-22 12:33
    Hatshepsut:
    An interesting definition of Acceptance Testing ("formal or informal testing to ensure that functional requirements as implemented are valid and meet the business need").

    Where I come from, Acceptance Testing verifies that the delivered product meets the contractually agreed requirements. Verification that these requirements meet the business need is a separate matter.

    Also interesting is the designation of functional and non-functional requirements - a concept that seems to come from a strongly data-processing oriented background, rather than, say, a system-control oriented background, in which performance is often a critical functional requirement.


    I don't think you understand what functional and non-functional mean in the context of software requirements. The fact that a requirement is critical does not make it a functional requirement; it means it has to be met for the system to work correctly.

    Also, some companies manufacture and sell software themselves, i.e. you can't just say "that's what it says in the requirements". Not all software development is outsourced. For example, a computer game has to go through "Acceptance Testing" with the people who are likely to use it, e.g. so-called "beta testing".
  • VoiceOfSanity 2011-03-22 12:35
    Hatshepsut:
    An interesting definition of Acceptance Testing ("formal or informal testing to ensure that functional requirements as implemented are valid and meet the business need").

    Where I come from, Acceptance Testing verifies that the delivered product meets the contractually agreed requirements. Verification that these requirements meet the business need is a separate matter.

    Also interesting is the designation of functional and non-functional requirements - a concept that seems to come from a strongly data-processing oriented background, rather than, say, a system-control oriented background, in which performance is often a critical functional requirement.

    From someone who has worked in the defense contractor environment for a few years now, this is very much a truth. Whether or not the item/program meets the business need of the customer is not as important as whether that item/program meets the contract requirements. You could build the most sophisticated fighter jet around for $8 million a jet, but if it doesn't meet the contract requirements then the military couldn't care less if it saves money/lives/time. The same is true for software: as long as it meets the contract requirements (even if it's just barely within those requirements), the software is accepted, buggy interface and occasional crashes along with it.

    Fortunately (at least in most cases) when it comes to the space program, just because it meets the contract requirements isn't enough, especially when it comes to man-rated spacecraft and space probes that'll be working for a decade or two (for example, the Cassini Saturn probe and the MESSENGER Mercury probe.) "Just good enough" doesn't cut it in situations like that, so the testing that is done is to ensure that things work, they work right and that they'll continue working in the future.

    Then again, those probes undergo the most intense real-world testing/use that anyone could envision. But that's after delivery/launch. *chuckle*
  • bar 2011-03-22 12:38
    C-Octothorpe:
    boog:
    My advice: wear a raincoat.


    ... and goggles. Some of them have sniper aim...

    SIDENOTE: are there really people who WANT to test? I mean, speaking as a developer looking at the role of QA, it just doesn't seem all that appealing to me. I am not the smartest guy by far, but from my personal experience, QA tends to attract people with the intelligence levels reserved for the mildly retarded (I'd say in the neighbourhood of 100, give or take 10 pts). This usually results in people three kinds of QA monkeys:

    1) The new grad/career switch above-average smart person trying to get into software development. Some people take this path, which is fine, and in some corps a necessary step...

    2) The shitty, change resistant x-developer who couldn't hash it anymore and (in)voluntarily goes into QA. This is more rare as pride usually takes over and they just quit and f*ck up another dev shop with their presence.

    3) The mildly retarded sociopaths who don't have the intelligence or social skills necessary to make it in any other role IT related. They may be able to make something of themselves in another profession, but because they got mad skillz at Halo, they MUST be good at everything IT related, which keeps them in the QA gig.

    I realize this sounds very elitist, which isn't my intention but this is just what I have noticed/pondered...



    If there is a trick to decent QA testing, it's to Choose to do it in the first place. Choose to restrict code submits to only those being accompanied by functional unit tests. Choose to use a system of shared peer review for said code submits. Choose to have a continuous and scheduled build of your product, and to insist that fixing a broken build takes priority. Choose to have representatives from the coding teams participate in the final QA process. Choose to get signoff from all teams before releasing. Choose to have feature freezes, periodically-reviewed API documentation and believable deadlines. Choose a risk-free release cycle. Choose QA for everyone (especially the managers).

    Captcha was of no consequat.
  • VoiceOfSanity 2011-03-22 12:39
    JadedTester:
    Pay peanuts, etc. Sack your team, and hire people who actually want to test, have technical skills and take it seriously as both a career and a technical field. Works for Microsoft.

    And here I thought Microsoft's beta testers were the general public once the RTM was issued out. *gryn*
  • anonym 2011-03-22 12:42
    frits:
    I mean do some developers really write code that they've never run before releasing it?


    sadly, yes
  • dkf 2011-03-22 12:43
    C-Octothorpe:
    SIDENOTE: are there really people who WANT to test?
    Yes, but it really does depend on whether you're starting out from the position of having tests already defined. When you're working with an already-highly-tested codebase that's got a good support framework in place, it's quite nice to focus strongly on TDD and trying to ensure that all new code paths that you add are fully tested. (Hint: it can result in the most amazing hacks to reliably trigger particularly awkward cases.)

    But if the code has very few tests, or if you're doing integration tests across a whole mess of dependencies, testing is a real drag; huge amounts of work for little reward.
  • Mr. Orangutan 2011-03-22 12:45
    Alex Papadimoulis:
    no matter how hard you try, a definitive answer is impossible. At best (i.e., with unlimited resources), you can be 99.999…% confident that there will be no defects in production

    but but 99.999...% IS 100%
  • boog 2011-03-22 12:47
    C-Octothorpe:
    SIDENOTE: are there really people who WANT to test? I mean, speaking as a developer looking at the role of QA, it just doesn't seem all that appealing to me. I am not the smartest guy by far, but from my personal experience, QA tends to attract people with the intelligence levels reserved for the mildly retarded (I'd say in the neighbourhood of 100, give or take 10 pts).

    I should point out that in my experience, the few technically-inclined testers with whom I've worked were actually business analysts who also participated in requirements gathering and analysis, documentation, etc. In other words, the best testers seem to be more than just testers.

    Even better, they already had a good understanding of how the software was supposed to work when they went about their testing responsibilities.
  • rfoxmich 2011-03-22 12:49
    Testing doesn't help at all if the underlying quality of the code is just bad (design flaws, fragility, etc.).

    I'm reminded of my informal rule for live music: The quality of the band is inversely proportional to the number of sound checks they make.

  • too_many_usernames 2011-03-22 12:49
    Tim:
    Even if you have coverage of 100% of the lines of code that doesn't mean you have covered 100% of the code paths.

    That's why there's a method of testing called "decision coverage" which is generally done in addition to code coverage.

    Decision coverage doesn't test every single input to a condition.

    What it does is test each possibility (true or false) for each decision, and shows that each input into that final "true or false" has an effect on the decision. Typically if you find an input to the Boolean expression has no effect on the Boolean it's an error of some sort. It also ensures that conditions which are supposed to be mutually exclusive are, in fact, exclusive.

    This way you test all *outcomes* rather than all possible ways to get that outcome. Of course, this testing is generally used in safety-critical applications where the main goal is safe operation, not some specific functionality; the goal of this testing is to ensure safe states due to (or at least gain a full awareness of the effects of) all decision paths.
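    A minimal sketch of the difference (hypothetical code, Java just to be concrete): a single call with both flags true gives you 100% line coverage of this decision, but decision coverage forces you to exercise each outcome and show that each input actually influences it.

    public class AccessCheckDemo {
        // The decision under test: two inputs feeding one Boolean outcome.
        static boolean mayEdit(boolean isOwner, boolean isAdmin) {
            return isOwner || isAdmin;
        }

        public static void main(String[] args) {
            // Run with "java -ea AccessCheckDemo" so the asserts are enabled.
            assert mayEdit(true, false);    // true outcome, driven by isOwner alone
            assert mayEdit(false, true);    // true outcome, driven by isAdmin alone
            assert !mayEdit(false, false);  // false outcome
            System.out.println("decision-coverage sketch passed");
        }
    }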
  • frits 2011-03-22 12:49
    Mr. Orangutan:
    Alex Papadimoulis:
    no matter how hard you try, a definitive answer is impossible. At best (i.e., with unlimited resources), you can be 99.999…% confident that there will be no defects in production

    but but 99.999...% IS 100%


    Congratulations, you've found the limit! You see, pedantry can also work against you.
  • VoiceOfSanity 2011-03-22 12:53
    Mr. Orangutan:
    Alex Papadimoulis:
    no matter how hard you try, a definitive answer is impossible. At best (i.e., with unlimited resources), you can be 99.999…% confident that there will be no defects in production

    but but 99.999...% IS 100%

    Not really. Let's say you've got a software package that depends on the clock being accurate. But as most folks know, computer clocks are not always that reliable, so unless you've compensated for that in the program, you will get errors that eventually magnify to the point of failure.

    So while the clock might be 99.999% correct, it's that 0.001% that causes an error and leads to the equipment failing to track an incoming Scud missile, letting it hit its target... which led to the death of 28 service members. Look at http://www.ima.umn.edu/~arnold/disasters/patriot.html for more information.
  • Mr. Orangutan 2011-03-22 12:56
    VoiceOfSanity:
    So while the clock might be 99.999% correct, it's that 0.001 that causes an error...

    99.999% != 99.999...%
    99.999... implies the repeating 9 ...
    as we all should know, 0.999... = 1
    therefore, 99 + 1 = 100

    next time i'll just leave my snarky comments to myself ... i'm out of orange crayons, and besides they're obviously not appreciated
  • toshir0 2011-03-22 12:58
    Jon:
    I'm throwing a WtfNotFunny exception.
    catch (WtfNotFunnyException wtf) {
        System.log(wtf);
    }
    finally {
        with (Mood.grumpy) {
            Activity.returnTo(previousActivity);
        }
    }
  • Wonk 2011-03-22 12:59
    Clearly QA people are not monkeys. They are baboons:
    http://www.newtechusa.com/ppi/talent.asp
  • Ton 2011-03-22 13:03
    Machtyn:
    I would like to sub scribe to this the ory about never leaving the house to avoid getting hit by a bus.


    I have some problems with that theory. The main one being that it is crap...

    http://cache3.asset-cache.net/xc/3176763.jpg?v=1&c=IWSAsset&k=2&d=77BFBA49EF878921F7C3FC3F69D929FDC3F7A0BDA4B902D54AE2144BAD5428282D78032B8FE88392A7CFF610D5B4FC25
  • trtrwtf 2011-03-22 13:05
    C-Octothorpe:

    SIDENOTE: are there really people who WANT to test? I mean, speaking as a developer looking at the role of QA, it just doesn't seem all that appealing to me. I am not the smartest guy by far, but from my personal experience, QA tends to attract people with the intelligence levels reserved for the mildly retarded (I'd say in the neighbourhood of 100, give or take 10 pts). This usually results in people three kinds of QA monkeys:




    If you treat testing as a sort of janitorial function - one which I suppose we need, but the people we hire to do it aren't really people, not like you and me, so we can get it done for cheap by hiring a few interns from the local community college and making the low coder on the totem pole supervise them - well, in that case you get what you deserve, and that's dribbling idiots who can follow a script and that's about it.

    Same with documentation - if you hire the lowest-cost warm body who can possibly be expected to do the job, you can maybe hope for something that's not very embarrassing.

    Imagine if you hired coders that way - oh, wait. Some people do, and that's why this site exists.


    1) The new grad/career switch above-average smart person trying to get into software development. Some people take this path, which is fine, and in some corps a necessary step...

    2) The shitty, change resistant x-developer who couldn't hash it anymore and (in)voluntarily goes into QA. This is more rare as pride usually takes over and they just quit and f*ck up another dev shop with their presence.

    3) The mildly retarded sociopaths who don't have the intelligence or social skills necessary to make it in any other role IT related. They may be able to make something of themselves in another profession, but because they got mad skillz at Halo, they MUST be good at everything IT related, which keeps them in the QA gig.

    I realize this sounds very elitist, which isn't my intention but this is just what I have noticed/pondered...


    Yeah, if you assume that testing doesn't matter, this is what you'll end up with.


    (by the way, IQ 100 is by definition normal - calling people with definitionally normal IQ retarded is, I suppose, as clearly elitist as you could get and still be a commoner)
  • boog 2011-03-22 13:06
    QJ:
    If a seemingly simple change can bring down the entire system, there's something fundamentally wrong with that system and it's ripe for a bit of guerrilla refactoring.

    Alex was talking about change impact. A seemingly simple change could bring down the entire system; the risk is small, but you need to consider the scope of the change you are making.

    Refactoring has a much larger change impact, since it typically touches so much more of the codebase. While my customers certainly don't mind paying shit tons extra to risk breaking their already-working software for no functional benefit, I'm just not too keen on all the extra testing I'd have to do.
  • Fred 2011-03-22 13:08
    rfoxmich:
    I'm reminded of my informal rule for live music: The quality of the band is inversely proportional to the number of sound checks they let you see them make.
    FTFY.
  • MadJo (professional software tester) 2011-03-22 13:09
    Ah, great, a testing article written by a developer. (I kid, it's a good article, but I do want to leave a few notes, if you don't mind.)

    Testing is done to ensure that the application that's built adheres to the design specifications and business requirements. That doesn't mean fully bug free. So indeed, 'adequate' not 'complete'.

    Someone designs the application, preferably using a standard like UML or something like that, and has the team review it.

    Then someone else develops the application, based on said designs.

    And you get someone else to test said application, with scripts based on the initial documentation, scripts that are reviewed by the team working on the project, written with a standard methodology, like ISTQB or TMAP Next, so that you can ensure the quality of the tests themselves.

    Using this setup, you can be reasonably sure, based on the depth of the test techniques used and the test data used, that there won't be any show-stopping bugs (in the *specified* functionality), and perhaps only minor bugs (again, only in the known, specified functionality).
    And with the reviews you'll have closed the "who guards the guardians" gap as well.

    This takes time, money and effort, and not every project has that in abundance (more often than not it lacks the time and resources to fully commit to it, but then you appoint key stakeholders that make time to do a basic review)

    And you can then choose to only really put a lot of effort into the stuff that's critical; those priorities are set by the client, not by a developer or a tester.

    No risk, very little (or no) test.

    On the flip side: no documentation also means no test. If a developer creates a lot of extra (perhaps handy) stuff, but it isn't in the design specs, how is the tester to know what that functionality should do?
    So, often the choice is made not to test it (or to test it only very rudimentarily: does it do anything terribly bad, like lock things up?), unless design specs are written up and it can be incorporated into the test plan. Then it will be tested more thoroughly.

    A few extra (and perhaps personal) notes:

    For me as a tester, agile development schemes don't make a lot of sense, as they can cause a lot of headaches trying to keep track of the changes in the documentation. Moving targets aren't fun if you have to make hundreds of test cases and the specs change by the day.
    Especially since testing is almost always on the critical path, it often makes for a very tight rope to do the foxtrot on.

    And you NEVER let a developer test his own stuff, because it'll be hard to get a grip on the quality in that case.
    No one is truly critical of his or her own creations. "Oh, I know what I mean there. I'll just leave it in." (Also, testers shouldn't review their own test cases; let someone else do that.)
  • Steve 2011-03-22 13:12
    Arghh the sad depressing sight of developers not getting testing.

    Testing is "A technical investigation done to expose quality-related information about the product under test" (Kem Caner http://kaner.com/)

    What most of you think of testing, is in fact just checking. You don't write unit tests, you are writing unit checks. Thinking about what you need to check, which code paths are relevant, which are not is the testing bit. Writing unit checks is just another thing devs need to learn. (for a better & longer explanation http://www.developsense.com/blog/2009/08/testing-vs-checking/ )

    Far too many developers are just not intellectually suited to doing testing, just as testers are not suited to doing development. That does not mean it is easy.

    I work very closely with my devs; I review their "unit tests", and they help me with my automation when needed.
    Doing this we eliminate a huge chunk of defects at design time. This frees up my time to do the exploratory testing that finds the bugs that happen only under certain preconditions and would never be found by code coverage.
  • JadedTester 2011-03-22 13:13
    MadJo (professional software tester):

    For me as a tester, AGILE development schemes doesn't make a lot of sense, as it can cause a lot of headache, trying to keep track off the changes in the documentation. Moving targets aren't fun if you have to make hundreds of testcases, and the specs change by the day.


    That's why you can use a checklist instead of test cases. Or, gasp, carry out exploratory testing. Or use BDD. Etc, etc. Did the entire context driven movement pass you by?
  • Kraagenskul 2011-03-22 13:14
    Plato is first credited with asking the question in his "The Republic", but the Latin is definitely Juvenal.
  • James Emerton 2011-03-22 13:30
    Unit Testing is the process of testing a very specific bit of code. Proper unit testing involves mocking out database or file access and testing the operation of a specific class or function.

    Integration Testing tests the full (integrated!) stack of code, such as you might find in a normal runtime. Integration tests will write to the database or filesystem, and as such they are expected to take more time to run than unit tests.

    Unfortunately, there is much confusion surrounding this issue, even among developers. Perhaps this is due to the fact that testing frameworks often have the word "Unit" in their name, in spite of the fact that they can usually be applied to any sort of automated testing.
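    A hypothetical sketch of the distinction (names made up, JUnit 4): the unit test below swaps the database behind a small interface for an in-memory fake, so it runs in milliseconds; an integration test of the same service would instead wire up the real repository and hit an actual database or filesystem.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class OrderServiceUnitTest {

        // The seam: production code talks to this interface, not to JDBC directly.
        interface PriceRepository {
            int priceInCents(String sku);
        }

        // The code under test.
        static class OrderService {
            private final PriceRepository prices;
            OrderService(PriceRepository prices) { this.prices = prices; }
            int totalInCents(String sku, int quantity) {
                return prices.priceInCents(sku) * quantity;
            }
        }

        @Test
        public void totalMultipliesUnitPriceByQuantity() {
            PriceRepository fake = sku -> 250;  // in-memory fake instead of a real database
            assertEquals(750, new OrderService(fake).totalInCents("SKU-1", 3));
        }
    }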
  • Sanity 2011-03-22 13:48
    Who cares if the there’s 100% code coverage when a unit test has a defect in it? Or if the requirements were misunderstood by the developer? Or if the requirements were wrong? Or if it’s not PCI compliant? Or if it breaks when it gets deployed to production?


    Because, quite frequently, tests are a good place to start when reading the code, and a good way to specify, or at least closely track, whether the code actually conforms to the specifications.

    Then, when things go wrong, you have the test framework as an entire other way to figure out what's going on.

    Also because doing it this way (particularly with unit tests) forces you to design for testability, and such a design has other benefits. Generally, when you need to fix a bug (or "correct a defect," if you like), it's extremely helpful to be able to easily isolate the problem into one small test case. Even if you haven't written the particular test that should fail in this case, it's a lot easier to do if your code has been forced to be testable. And of course, if you do write a failing test, you now have a regression test for when you actually fix it, and a useful tool for the process of fixing it.

    I agree that the real goals should be, well, the real goals. If the choice is between 100% test coverage and being able to deploy to production without breaking things, that's a no-brainer -- tests are a tool to make your program work, but clearly the goal is to make your program work. I just wanted to point out why, if you do suddenly find a really serious defect (like not being able to deploy to production), generally, the more comprehensive your test suite, the better off you are.

    Also:

    it’s not really a problem if Father Cronin needs to type in “<br />” for a line break (it probably isn’t even worth your bill rate to fix that defect),


    I guess that's TRWTF, because this makes me weep for my profession. It's not really a problem either if the parishioners need to duck because the door to the church is too low, but I would fix it anyway out of sheer embarrassment -- how hard is it to get this right?

    I understand that Father Cronin doesn't need five nines for his database, and maybe we don't have to care about Unicode support, but there's a difference between a tricky-but-obscure defect that's not worth fixing and having some pride in your craft.
  • Zylon 2011-03-22 14:00
    A little more than a year ago, I was in the market for kitchen appliances and had a pretty good idea of what I could get with my budget. It wasn’t a whole lot, but then again, neither was my budget.

    These sentences brought to you by the department of redundancy department, which is the department that brought them to you.
  • C-Octothorpe 2011-03-22 14:07
    trtrwtf:
    C-Octothorpe:

    SIDENOTE: are there really people who WANT to test? I mean, speaking as a developer looking at the role of QA, it just doesn't seem all that appealing to me. I am not the smartest guy by far, but from my personal experience, QA tends to attract people with the intelligence levels reserved for the mildly retarded (I'd say in the neighbourhood of 100, give or take 10 pts). This usually results in people three kinds of QA monkeys:




    If you treat testing as a sort of janotorial function, one which I suppose we need but the people we hire to do it aren't really people, not like you and me, we can get it done for cheap by hiring a few interns from the local community college and making the low coder on the totem pole supervise them - well, in that case you get what you deserve, and that's dribbling idiots who can follow a script and that's about it.

    Same with documentation - if you hire the lowest-cost warm body who can possibly be expected to do the job, you can maybe hope for something that's not very embarrassing.

    Imagine if you hired coders that way - oh, wait. Some people do, and that's why this site exists.


    1) The new grad/career switch above-average smart person trying to get into software development. Some people take this path, which is fine, and in some corps a necessary step...

    2) The shitty, change resistant x-developer who couldn't hash it anymore and (in)voluntarily goes into QA. This is more rare as pride usually takes over and they just quit and f*ck up another dev shop with their presence.

    3) The mildly retarded sociopaths who don't have the intelligence or social skills necessary to make it in any other role IT related. They may be able to make something of themselves in another profession, but because they got mad skillz at Halo, they MUST be good at everything IT related, which keeps them in the QA gig.

    I realize this sounds very elitist, which isn't my intention but this is just what I have noticed/pondered...


    Yeah, if you assume that testing doesn't matter, this is what you'll end up with.


    (by the way, IQ 100 is by definition normal - calling people with definitionally normal IQ is, I suppose as clearly elitist as you could get and still be a commoner)


    My bad, I didn't realize 100 was normal (I thought 110 or 115 was), so you can keep your vitriol...

    Either way, I'm not disagreeing with you or with the whole concept of testing. It is very necessary; however, what I was attempting to illustrate is that it is often grossly underrated by management and other key decision makers. This results in the conditions outlined in my rant (mouth-breathers going through 'Test Case 43, Steps 1-8'). They are usually only slightly more tech-savvy (i.e., they can set up a router) than your average person.
  • Clockwork Orange 2011-03-22 14:07
    Alex's Soapbox was what made The Daily WTF a keeper. Sadly, we don't get enough of these insightful articles.

    Keep 'em coming, Alex?
  • I Push Pixels 2011-03-22 14:07
    Depending on where you are, it may be the organization itself that prevents the testers from acting like they know what they're doing.

    As a former tester, I've been on both sides of the equation. I have been in situations where the lead tester demanded that we stick to the test script, icebergs (and crashes!) be damned, and as a coder, I have talked to the testers and found out, surprise, surprise, that many of them have a pretty good idea of what's happening under the hood (even if only from a layperson's perspective), but alas, they've been instructed, on pain of death, to stick to the script and keep their bug reports non-technical.

    I came from the game industry, which is frequently a massive worse-than-failure unto itself and goes out of its way to perpetuate animosity between the engineers and the testing staff, so things may be different.
  • I Push Pixels 2011-03-22 14:08
    That was in regard to the 'qa monkey' comment.
  • Sobriquet 2011-03-22 14:15
    trtrwtf:
    (by the way, IQ 100 is by definition normal


    No, the distribution of IQ is normal. 100 is the expected value.

    - calling people with definitionally normal IQ [retarded] is, I suppose as clearly elitist as you could get and still be a commoner)


    I can't be a commoner... not when I'm a citizen of imperialist America.
  • SalukiJim 2011-03-22 14:27
    How about the REAL WTF here: a company I worked at let all of its testers go, leaving only the actual devs to test. When you ask a dev to test (self included), we have "blinders" on for the list of possible ways to break something... And to this day there is no dedicated test dept. In an IT dept of 40-50 devs, not mom & pop and the chimp :)
  • Alex Papadimoulis 2011-03-22 14:30
    Sanity:
    it’s not really a problem if Father Cronin needs to type in “<br />” for a line break (it probably isn’t even worth your bill rate to fix that defect),


    I guess that's TRWTF, because this makes me weep for my profession. It's not really a problem either if the parishioners need to duck because the door to the church is too low, but I would fix it anyway out of sheer embarrassment -- how hard is it to get this right?


    I chose that example because it's relatively easy to forget to add .Replace("\n", "<br />") when outputting HTML, and it's something I'm sure we've all done. While that example is trivial to fix, there are plenty of defects that are neither the builder's nor the client's fault, and that are *not* trivial to fix (let's say, a full week of time).

    The onus is on the party that assumed the risk of the unknown. If it's a "fixed bid" agreement, then it's the professional obligation of the builder to fix and deliver the promised results. But if it's a "time and materials" agreement, the client has to pay to fix it. If they don't think it's worth 40 hours to fix it, even the most dedicated craftsman will not give up a full week just to have a better craft.
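    (For what it's worth, a hypothetical sketch of that easy-to-forget step, in Java rather than the .Replace above: escape the user's text for HTML first, then turn the newlines into <br />.)

    public final class HtmlText {
        private HtmlText() {}

        // Hypothetical helper: escape user-entered text for HTML, then convert
        // newlines to <br />. Forgetting the second call is exactly the defect
        // that leaves Father Cronin typing "<br />" by hand.
        public static String toHtml(String userText) {
            String escaped = userText
                .replace("&", "&amp;")
                .replace("<", "&lt;")
                .replace(">", "&gt;");
            return escaped.replace("\r\n", "\n").replace("\n", "<br />");
        }

        public static void main(String[] args) {
            System.out.println(toHtml("First line\nSecond line"));
            // prints: First line<br />Second line
        }
    }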
  • trtrwtf 2011-03-22 14:34
    C-Octothorpe:
    My bad, I didn't realize 100 was normal (I thought 110 or 115 was), so you can keep your vitriol...

    No vitriol intended. Just pointing out that there was a place where you were clearly being elitist. Unintentionally, as it turns out.

    I agree with you on the symptoms, and it sounds like you agree with me on the cause. No fuss, no bother.
  • Silfax 2011-03-22 14:39
    Zelda:
    programmers use all their salary to buy dates


    They call them hookers in these parts...

    Anyway I thought this was the daily wtf, not slashdot.
  • dogbrags 2011-03-22 14:56
    What??? Did Alex become Jeff (Atwood) overnight?
  • clive 2011-03-22 14:59
    C-Octothorpe:
    SIDENOTE: are there really people who WANT to test?


    Have a look at James Bach's stuff - there's a man who wants to test, and appears to be quite good at it.
  • нагеш 2011-03-22 15:08
    Alex Papadimoulis:
    It’s been a rough couple weeks. Not only did I have all sorts of catching-up to do after Code PaLOUsa, but it also happened to be release week. And oh, do I hate release week.

    And apparently, you also died and coped with alcoholism, at the same time.
  • blakeyrat 2011-03-22 15:19
    neminem:
    Tim:
    In fact, since the number of code paths through the entire code base grows exponentially you have covered some vanishingly small percentage of the code paths.

    I'd argue that anytime your input comes from the user, or from another system whose output you have no control over, you've covered exactly 0%. A finite fraction of an infinity is, after all, precisely nothing. (We once found a major blocking bug in a process iff the text input to the process started with a lowercase 'p'. That was a fun one.)

    But I'm happy to work at a company that distinguishes between devs and testers; we certainly are responsible for testing the effects of our own code before checking in changes (that would be your #1), but the "test team" consists of people who were hired for that purpose. I sort of thought it was like that everywhere (well, everywhere where the whole company isn't a couple devs.)


    I recently came across one where a database was happy to store the string "\0\0\0test" when almost nothing else in the environment saw it as anything but an empty string. Good luck writing a unit test for that one *before* you encounter it...
  • blakeyrat 2011-03-22 15:25
    C-Octothorpe:
    SIDENOTE: are there really people who WANT to test? I mean, speaking as a developer looking at the role of QA, it just doesn't seem all that appealing to me. I am not the smartest guy by far, but from my personal experience, QA tends to attract people with the intelligence levels reserved for the mildly retarded (I'd say in the neighbourhood of 100, give or take 10 pts). This usually results in three kinds of QA monkeys:

    1) The new grad/career switch above-average smart person trying to get into software development. Some people take this path, which is fine, and in some corps a necessary step...

    2) The shitty, change resistant x-developer who couldn't hash it anymore and (in)voluntarily goes into QA. This is more rare as pride usually takes over and they just quit and f*ck up another dev shop with their presence.

    3) The mildly retarded sociopaths who don't have the intelligence or social skills necessary to make it in any other role IT related. They may be able to make something of themselves in another profession, but because they got mad skillz at Halo, they MUST be good at everything IT related, which keeps them in the QA gig.

    I realize this sounds very elitist, which isn't my intention but this is just what I have noticed/pondered...


    I did testing for Xbox 360 games for a year. I loved it, absolutely loved it, and the developers loved me because (since I loved it) I found a ton more bugs than the drooling morons who are usually in that position. And I daresay I'm pretty goddamned good at it.

    The problem is that there's no career in it. The only way to make a career in QA is to move up into management, where instead of actually testing the software, you do nothing but herd the drooling morons around.

    If you're a developer, you generally can work an entire career while still developing software; that doesn't exist in QA.
  • phew 2011-03-22 15:28
    QJ:
    frits:
    The 100% code coverage metric is so dumb. Have these people never heard of permutation? I assume that 100% of the code is tested at least informally. I mean do some developers really write code that they've never run before releasing it?


    In some of the places I've worked I've had to opine that it would be nice if the developers checked whether their code actually compiles in the first place.


    No shit. We have about 450 build targets in the automated build system. About a dozen are currently broken. And often it's obvious the code couldn't compile. (In theory, everything is supposed to be peer-reviewed too)
  • Matt Westwood 2011-03-22 15:29
    frits:
    Mr. Orangutan:
    Alex Papadimoulis:
    no matter how hard you try, a definitive answer is impossible. At best (i.e., with unlimited resources), you can be 99.999…% confident that there will be no defects in production

    but but 99.999...% IS 100%


    Congratulations, you've found the limit! You see, pendantry can also work against you.


    100% confidence is not 100% certainty. You could be 100% confident and 100% deluded.
  • JB 2011-03-22 15:51
    The Ancient Programmer:
    Ed:
    Someone needs to explain that last bit to my boss. Badly.


    Why explain it to him badly?


    No, see, his boss is called "Badly". Like "Joe", or "Ed".
  • JB 2011-03-22 15:53
    Mr. Orangutan:

    (snip)
    0.999... = 1
    therefore, 99 + 1 = 100


    So, a = b therefore c + b = d?

    You see, pendantry can also work against you.
  • anon#213 2011-03-22 16:00
    I thought the wtf was the fridge being too short. That small amount of useless space would drive me crazy.
  • trtrwtf 2011-03-22 16:11
    anon#213:
    I thought the wtf was the fridge being too short. That small amount of useless space would drive me crazy.


    Your pretty easily driven crazy, aren't you?
  • C-Octothorpe 2011-03-22 16:17
    trtrwtf:
    anon#213:
    I thought the wtf was the fridge being too short. That small amount of useless space would drive me crazy.


    Your pretty easily driven crazy, aren't you?


    Your pretty easily driven crazy, aren't you?
  • C-Octothorpe 2011-03-22 16:18
    C-Octothorpe:
    trtrwtf:
    anon#213:
    I thought the wtf was the fridge being too short. That small amount of useless space would drive me crazy.


    Your pretty easily driven crazy, aren't you?


    Your pretty easily driven crazy, aren't you?


    *let's watch and see what happens...*
  • anon#213 2011-03-22 16:20
    trtrwtf:
    anon#213:
    I thought the wtf was the fridge being too short. That small amount of useless space would drive me crazy.


    Your pretty easily driven crazy, aren't you?

    DAMMIT ITS "YOU'RE", NOT "YOUR"!!!!!!!
  • Mr. Orangutan 2011-03-22 16:21
    JB:
    Mr. Orangutan:

    (snip)
    0.999... = 1
    therefore, 99 + 1 = 100


    So, a = b therefore c + b = d?

    You see, pendantry can also work against you.

    Okay, so I skipped a step that I thought should have been obvious to the casual reader here.

    So, if A = 99 and B = 0.999..., then A + B = 99.999... (the original). Since 0.999... = 1, B = 1, thus 99.999... = A + B = 99 + 1 = 100
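    (For anyone who wants the textbook version of that step rather than hand-waving, the geometric series does it:)

    0.999\ldots = \sum_{k=1}^{\infty} \frac{9}{10^{k}} = \frac{9/10}{1 - 1/10} = 1,
    \qquad\text{so}\qquad 99.999\ldots = 99 + 0.999\ldots = 100.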
  • trtrwtf 2011-03-22 16:21
    anon#213:
    trtrwtf:
    anon#213:
    I thought the wtf was the fridge being too short. That small amount of useless space would drive me crazy.


    Your pretty easily driven crazy, aren't you?

    DAMMIT ITS "YOU'RE", NOT "YOUR"!!!!!!!


    Thanks, C-Octo. I owe you a beer.
    (I love to see them pop like that)
  • Mr. Orangutan 2011-03-22 16:24
    JB:
    Mr. Orangutan:

    (snip)
    0.999... = 1
    therefore, 99 + 1 = 100


    So, a = b therefore c + b = d?

    You see, pendantry can also work against you.


    And for the record, I wasn't being pedantic .. I was being snarky ... and now I'm going to be sardonic ... it's "pedantry" not "pendantry"
  • frits 2011-03-22 16:25
    Matt Westwood:
    frits:
    Mr. Orangutan:
    Alex Papadimoulis:
    no matter how hard you try, a definitive answer is impossible. At best (i.e., with unlimited resources), you can be 99.999…% confident that there will be no defects in production

    but but 99.999...% IS 100%


    Congratulations, you've found the limit! You see, pendantry can also work against you. <==lol. I Muphryed that one.


    100% confidence is not 100% certainty. You could be 100% confident and 100% deluded.


    Wrong kind of confidence.
  • нагеш 2011-03-22 16:32
    Machtyn:
    I would like to sub scribe to this the ory about never leaving the house to avoid getting hit by a bus.



    You still run the risk of an airplane crashing through your roof, or heck, a bus running through your front door. Maybe Nagesh and his taxi will let themselves in.
  • Darth Kaner 2011-03-22 16:37
    I have tested your software. Pray I do not test it any further.
  • Bert Glanstron 2011-03-22 16:42
    Dear Darth Kaner,

    In case you can’t tell, this is a grown-up place. The
    fact that you insist on using your ridiculous handle
    clearly shows that you’re too young and too stupid
    to be commenting on TDWTF.

    Go away and grow up.

    Sincerely,
    Bert Glanstron
  • C-Octothorpe 2011-03-22 16:42
    trtrwtf:
    anon#213:
    trtrwtf:
    anon#213:
    I thought the wtf was the fridge being too short. That small amount of useless space would drive me crazy.


    Your pretty easily driven crazy, aren't you?

    DAMMIT ITS "YOU'RE", NOT "YOUR"!!!!!!!


    Thanks, C-Octo. I owe you a beer.
    (I love to see them pop like that)


    Yeah, it usually starts with a slight twitch in one eye, and goes from there...
  • mwanaheri 2011-03-22 17:21
    From the last two years of maintaining a background application (converting EDIFACT messages), I'd suggest two main features:
    1. Fallback/repair strategies for anything that goes wrong.
    2. Monitoring. Absolutely essential. If something goes wrong in a background process, let me know immediately so that the data can be fixed before further damage is caused.

    I don't care about speed, don't mind a clumsy UI, and won't be happy with a strict verification of the standard (most partners have their 'specials' anyway). If, on top of that, I can configure the converter nicely, I'm happy. Even with some bugs.
  • MadJo (Professional tester) 2011-03-22 17:39
    Exploratory testing implies more in-depth knowledge of the system than I normally had; only after a month or so could I start doing exploratory testing, well into the second iteration. By that time, I'd have had to rewrite hundreds of these test cases.

    BDD: yes, as a matter of fact, this IS the first time I've heard of it. And after reading up on it, I could not have applied it to the project I was working on.

    Checklists are all fine and dandy, but when you have a system interface test to do, with hundreds of variables, where the interface specs change by the day, you can't keep up with checklists alone. Especially if they demand a full test every god damn release. (And sadly I was at the bottom of the food chain.)
  • Ryan 2011-03-22 18:02
    Coyne:
    Okay, so now let's create a fallback. That's hard, right? No, in this case actually it isn't: The solution is to back up the entire database before running the apply process. Every single time a batch is to be applied! That way, if something goes wrong, you fix the problem, restore, rerun and everything is cool.


    Or, you know, use a database transaction to update the accounts. Rollback is much easier than restoring a database from backup. Though if your developers insist on ignoring errors, you should probably do the backup too.

    Fault tolerance (the design principle that encompasses recoverability) definitely doesn't get the attention it deserves in most software development. Fault-tolerant operating systems are pretty popular in embedded systems (no joke) to protect applications from each other and restart things gracefully when something does crash. The closest thing we get on the desktop is "sorry about your luck; would you like to report this crash?" and the occasional document or browser tab recovery.
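    For what it's worth, a minimal sketch of the transaction-per-batch idea (plain JDBC; the table and column names are made up for illustration) would be something like:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    // Sketch: apply a donations batch inside one transaction. If any statement fails
    // (e.g. "the table is full"), everything rolls back, including the batch delete,
    // so the day's data entry is never lost.
    public class ApplyBatch {
        public static void apply(Connection conn, int batchId) throws SQLException {
            conn.setAutoCommit(false);
            try (PreparedStatement update = conn.prepareStatement(
                     "UPDATE accounts SET balance = balance + ? WHERE account_id = ?");
                 PreparedStatement delete = conn.prepareStatement(
                     "DELETE FROM donation_batches WHERE batch_id = ?")) {
                // ... loop over the batch rows here, calling update.executeUpdate() for each;
                // any SQLException aborts the whole apply.
                delete.setInt(1, batchId);
                delete.executeUpdate();
                conn.commit();   // only now does the batch actually disappear
            } catch (SQLException e) {
                conn.rollback(); // accounts and batch are left exactly as they were
                throw e;
            }
        }
    }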
  • pgn674 2011-03-22 19:46
    Alex Papadimoulis:
    Like many things that converge on perfection, there are significantly increasing costs as you approach 100%. A five-minute smoke test may only provide 40% certainty, but it may cost five hours of testing to achieve 60%, and fifty hours to achieve 80%.
    You might want to choose better example numbers here. When you do the first jump, you go from 5 minutes to 5 hours (5hr/5min= 60 times as much), and from 40% to 60% (60/40= 1.5 times as much). That's a 1.5/60= 0.025 goodness efficiency you get from this jump.

    But in the second example, you go from 5 hours to 50 hours (50/5= 10 times as much), and from 60% to 80% (80/60= 1.333 times as much). That's a 1.333/10= 0.133 goodness efficiency you get from the second jump. The second jump is better than the first jump, and the point you wanted to make through your example was that the jumps would get progressively worse.

    You want this to come out True, where 'p' is percentage, and 't' is time:
    (p2/p1)/(t2/t1)>(p3/p2)/(t3/t2)
  • Sutherlands 2011-03-22 20:03
    pgn674:
    Alex Papadimoulis:
    Like many things that converge on perfection, there are significantly increasing costs as you approach 100%. A five-minute smoke test may only provide 40% certainty, but it may cost five hours of testing to achieve 60%, and fifty hours to achieve 80%.
    You might want to choose better example numbers here. When you do the first jump, you go from 5 minutes to 5 hours (5hr/5min= 60 times as much), and from 40% to 60% (60/40= 1.5 times as much). That's a 1.5/60= 0.025 goodness efficiency you get from this jump.

    But in the second example, you go from 5 hours to 50 hours (50/5= 10 times as much), and from 60% to 80% (80/60= 1.333 times as much). That's a 1.333/10= 0.133 goodness efficiency you get from the second jump. The second jump is better than the first jump, and the point you wanted to make through your example was that the jumps would get progressively worse.

    You want this to come out True, where 'p' is percentage, and 't' is time:
    (p2/p1)/(t2/t1)>(p3/p2)/(t3/t2)
    What? No you don't. What you want is to show that each progressive increase takes a lot longer, not that the.... degradation of efficiency is not as much. <snipped>
  • mystery guest 2011-03-22 20:16
    Ed:
    Someone needs to explain that last bit to my boss. Badly.

    I can explains to him. Badly. Very Badly, in fact.
  • Mr Frost 2011-03-22 20:21
    I don't understand how testing would have had any impact in identifying a bug that was created post-deployment (the fridge was scratched in transit after it had been built).

    There was a bug in my computer - it stopped working after I dropped it. I think this is a result of the windows operating system not having been sufficiently tested for this scenario.
    Maybe I don't understand testing.
  • Rohypnol 2011-03-22 20:36
    dadasd:
    One real WTF is the number of developers (yes, 341777, I'm looking at you) who still think unit testing is a testing technique, rather than a programming one.


    I think that dude was a tester - he talked about large sums of money...
  • Tester 1A 2011-03-22 20:45
    Maurizio:
    I have a problem with this:
    > Integration Testing – does the program function?

    What does "the program function" means ? Doesn't crash ? That is easy. But what if just behave differently from expected ?
    Than, what is expected ? What is expected is define in the functional specifications, so what is the real difference between integration and functional testing ?

    My personal definition, that i am using in my work in a big IT departement, is that integration test verify that a codebase correspond to a technical design, i.e. that the different modules interacts as the architect/developer decided, while the functional tests verify that the design and the code actually correspond to the functional requirements.

    Opinions ?


    Surely integration testing tests that the program 'works' once all the components are integrated. That is, individual components have been tested (unit testing?) and are believed to work. They are then chucked together in one massive conglomeration. The Integration test basically looks to make sure that 'things' work - that is, we don't crash unexpectedly, any results we present appear valid (though not necessarily correct) etc.

    Functional testing tests that the application works as expected. That the results it returns are correct (given the requirements) and that the application doesn't simply do something (which is what integration testing does) but that the application does what the requirements say that the application does.

    Integration testing can be done (reasonably) blind. We don't deliberately test using real, or even realistic data, and we don't necessarily need an awareness of how the application will be used. We check that menu items do things (we don't care whether that's necessarily what they're meant to). Functional testing, on the other hand, would check that selecting a menu item does exactly what the requirements (or design, or every other document) indicates that it does what it should do.

    There is a fine line, to a degree, insofar as selecting a 'Print' button probably has a certain expectation that a print dialogue is displayed (even in integration testing). Functional testing would require this dialogue to have a specific appearance, and possibly a specific functionality of its own....
  • Dehli Belly 2011-03-22 20:57
    boog:
    Anonymous:
    Anyway, I agree with the sentiment and I always unit test functionality that cannot be adequately exercised by an end user. But everything else I leave to the chimps in the testing department. Why make senior developers do the work of a trained monkey?

    I've found that as a developer, I'd much prefer a testing department comprised of smart humans with strong technical skills, rather than chimps. With technical testers, you get surprisingly-useful reports about defects and failing tests, making it much easier to identify the problem and correct it.

    With trained monkeys, you only get phone calls saying "program no work; fix it". The flying poo can become a bit of a problem too.


    Hmm....I think it's a 'horses for courses' thing. I think some testing by highly technical people is required (it should be noted that some testers are highly technical people). I think some testing should be done by people with average technical skill (ie follow the instruction type testing that a monkey should be able to do - if they find a problem, it is easy enough to walk through it with them to work out what [s]they messed up[/s] the issue is).
    Some testing then needs to be done with real monkeys. Preferably the tech-savvy ones who are proficient in opening all manner of files by double-clicking them, have the expertise to inadvertently cripple any system you might allow them to use, and have a very good memory (they have memorized word for word "I didn't do anything, but the computer... and I don't remember what I did before").

    I would have thought testing requires all sorts of testing by all sorts of different parties to give the product the best chance at quality. Technical-only testing doesn't work, because technical people aren't necessarily looking for results to mean the same thing. "Broken" doesn't mean the same thing either, nor any one of a number of descriptions for problems that occur. This site brings many of them to light....
  • AceCoder 2011-03-22 21:04
    JadedTester:
    I mean do some developers really write code that they've never run before releasing it?


    Yes. Unfortunately, the amount of testing a developer does tends to have a negative correlation with the amount of testing their code actually needs. There's usually no clearer sign of a developer who belongs on the dole queue than one who thinks their code will work first time.


    Perhaps part of this problem stems from estimates at the planning stage that make the same assumption: it will work the first time it is written, and if not, an extra 5 minutes should suffice for me to track down and repair any issues...
  • Jimbo 2011-03-22 21:15
    QJ:
    boog:
    Change Impact – the estimated scope of a given change; this varies from change to change and, like testing, is always a “good enough” estimate, as even a seemingly simple change could bring down the entire system

    I think about this every time somebody tells me to "just refactor".


    If a seemingly simple change can bring down the entire system, there's something fundamentally wrong with that system and it's ripe for a bit of guerrilla refactoring.


    indeed:
    Consider original code

    int a=3, b=4;
    printf("%d + %d = %d\n", a, b, a+b);


    version 1.1

    char *numWord[] = { "one", "two", "three", "four"};
    int a=3, b=4;
    printf("%s + %s = %d\n", numWord[a], numWord[b], a+b);


    small changes can have massive impacts.

    Something about butterflies flapping their wings and tsunamis...
  • Asdg 2011-03-22 21:22
    QJ:
    frits:
    The 100% code coverage metric is so dumb. Have these people never heard of permutation? I assume that 100% of the code is tested at least informally. I mean do some developers really write code that they've never run before releasing it?


    In some of the places I've worked I've had to opine that it would be nice if the developers checked whether their code actually compiles in the first place.


    Nagesh is on holidays.
    When did you work in India? Failing to compile is a bug that should be picked up by testers well before deployment into production. Good quality programmers are expensive and shouldn't waste time on such trivialities as compiling code.

    On a more serious note, I have been in situations where this appears to be the case, but it is simply that not everything that was changed was committed. In particular, I've found header files missing. Also, an individual component may build fine, but issues may arise when a full build is attempted depending on files mutually used (I remember one example where a library had been changed, and the signature of some methods had changed as a result. This meant that other components that relied on that same library wouldn't build under a full release. Admittedly, the libraries are shared in weird ways, and there should be mechanisms to minimise the chance of this happening and a load of other things, but we don't always pick the codebases we inherit. His deployment went through fine, but when subsequent changes had to be made, it took a while to realise that the library had been destroyed by a previous change) or something.
  • lolwtf 2011-03-22 21:55
    Coyne:
    One of the processes was the daily account apply. You entered incoming donations in a batch; the apply process would then read the batch, update the accounts and delete the batch.

    On the disaster day in question, the accounts table reached size limit part way (early on) through the processing of the batch and, since the developers ignored such mundane messages from the database as "the table is full and I can't insert this now", the process blythely continued on.

    Then it deleted the batch.

    No fallback. No way to recover the batch and so an entire day of entry by the user was lost.
    Reminds me of a friend's early attempt at a message board in PHP. They weren't familiar with databases, so they stored everything in flat files. All user information was stored in a file (one line per user, much like CSV except with some control character instead of commas); all messages were stored in one giant file again separated by some control character.

    When they went to implement editing posts, they realized you can't really insert things into the middle of a file (if the new post is longer than the old one), so they implemented it as:
    1) Create a new posts file
    2) Copy all of the posts before the one being edited into the new file
    3) Write the edited post into the file, just as a new post would be written
    4) Copy all of the posts after it into the new file
    5) Replace the original file with the new file.
    This worked well enough; it was of course slow but it was a small enough forum that it wasn't really an issue. Until eventually that file grew to be quite a few megabytes - and the server started to run low on disk space.
    Traffic was low enough that the posts file itself didn't grow to fill the disk - but then someone tried to edit a post. After copying just a few posts into the new file, it ran out of space. It then proceeded to blindly ignore the fact that the following writes were failing, until it had finished "copying" all of the posts into the new file - then deleted the original file and replaced it with the now much smaller "updated" copy. Whoops.

    As a fun side note, they also didn't think to validate their inputs, so you could become an administrator just by putting the control character used for field separation in your username followed by a 1. You'd have a post count of "January" and other such nonsense, but oh well.
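    For what it's worth, the safer pattern (write everything to a temp file, and only swap it in if every write succeeded) is a handful of lines; here's a rough sketch in Java rather than their PHP, with made-up names:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;
    import java.util.List;

    // Sketch: rewrite the posts file via a temp file so a failed write (disk full, etc.)
    // can never destroy the original. Names (postsFile, posts) are illustrative only.
    public class SafeRewrite {
        static void saveAllPosts(Path postsFile, List<String> posts) throws IOException {
            Path tmp = postsFile.resolveSibling(postsFile.getFileName() + ".tmp");
            // Any failed write throws here, before the original file is touched.
            Files.write(tmp, posts);
            // Only replace the original once the new copy is known to be complete.
            Files.move(tmp, postsFile, StandardCopyOption.REPLACE_EXISTING);
        }
    }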
  • Yais isn't it, wot? 2011-03-22 22:08
    MadJo (professional software tester):


    And you NEVER let a developer test his own stuff, because it'll be hard get a grip on the quality in that case.
    No one is truly really critical on his or her own creations. "Oh, I know what I mean there. I'll just leave it in." (also testers shouldn't review their own testcases, let someone else do that)


    Agree 100% that developers should never test their own stuff. Aside from anything else, developers have in mind the problem that they are trying to fix, and will most likely only test for whether that problem is fixed (not necessarily whether other cases still work). Developers (especially support developers working on bug fixes) tend to have a closed world view, where all they see is the problem they are addressing. This, coupled with the fact that most devs back themselves (this is normal; it is very difficult to get anything done if you doubt your own work) and are likely to under-test and make assumptions about their change working as intended, makes developer-only testing a touch dangerous (that said, I would expect any reasonable developer to have at least performed basic sanity checking to make sure that the change appears to have made the difference expected).
  • Yais isn't it, wot? 2011-03-22 22:20
    Sanity:

    <some snipping>
    I agree that the real goals should be, well, the real goals. If the choice is between 100% test coverage and being able to deploy to production without breaking things, that's a no-brainer -- tests are a tool to make your program work, but clearly the goal is to make your program work. I just wanted to point out why, if you do suddenly find a really serious defect (like not being able to deploy to production), generally, the more comprehensive your test suite, the better off you are.

    <some more snipping>

    I understand that Father Cronin doesn't need five nines for his database, and maybe we don't have to care about Unicode support, but there's a difference between a tricky-but-obscure defect that's not worth fixing and having some pride in your craft.


    I'll assume you just worded things a little strangely in your haste... tests are not a tool to make your program work, but rather a tool that highlights items in your program that do not work. An analogy (though not a very good one necessarily) might be that the gauges on the dash of a car are not tools that help your car work, but rather tools that help you find issues with your car.

    As for Fr. Cronin, while I agree with your point about pride, I thought Alex's point was that the criticality of an issue is based on a number of factors, and that an inconvenience to someone who is running a very non-critical system is not a high priority (especially if they have been engaged at a discount rate).
  • Alex Papadimoulis 2011-03-22 22:47
    Mr Frost:
    I don't understand how testing would have had any impact in identifying a bug that was created post-deployment (the fridge was scratched in transit after it had been built).


    It's a stretched analogy to begin with, but the idea here is more that the supply chain has a defect. In theory, LG could have invested more to fix this defect, but they instead accepted the loss of profit.
  • Grandma Nazzi 2011-03-22 22:53
    Clockwork Orange:
    Alex's Soapbox was what made The Daily WTF a keeper. Sadly, we don't get enough of these inciteful articles.

    Keep 'em coming, Alex?


    FTFY
  • jeo 2011-03-22 22:56
    I Push Pixels:
    Depending on where you are, it may be the organization itself that prevents the testers from acting like they know what they're doing.

    As a former tester, I've been on both sides of the equation. I have been in situations where the lead tester demanded that we stick to the test script, icebergs (and crashes!) be damned. And as a coder, I have talked to the testers and found out, surprise, surprise, that many of them have a pretty good idea of what's happening under the hood (even if only from a layperson's perspective), but alas, they've been instructed, on pain of death, to stick to the script and keep their bug reports non-technical.

    I came from the game industry, which is frequently a massive worse-than-failure unto itself and goes out of its way to perpetuate animosity between the engineers and the testing staff, so things may be different.


    Worse still, it seems to promote an attitude of 'fix (or remove) the offending test script', rather than 'let's fix the actual problem'. I have worked in a place where it was acknowledged that some of the test cases appeared to fail, but this was related to bad coding in the test case rather than bad coding in the application. I never really understood why we couldn't simply remove the test cases we knew to be broken (or change their expected result if we were happy that they failed).
  • John Muller 2011-03-22 23:39
    Some code coverage tools actually check the possible paths through functions/methods, not just lines/statements.
  • Coyne 2011-03-23 00:42
    Ryan:
    Coyne:
    Okay, so now let's create a fallback. That's hard, right? No, in this case actually it isn't: The solution is to back up the entire database before running the apply process. Every single time a batch is to be applied! That way, if something goes wrong, you fix the problem, restore, rerun and everything is cool.


    Or, you know, use a database transaction to update the accounts. Rollback is much easier than restoring a database from back up. Though if your developers insist on ignoring errors, you should probably do the backup too.


    Remember that in this case, the developers did not check for errors: No detection = blindly commit = the error becomes permanent.

    And therein lies the hitch, because even if the developers did normally check for errors, the check might have been accidentally omitted for the critical INSERT statement. We call those bugs. And then, if no one actually tested to see what happened if the INSERT failed, well then no detection = blindly commit = the error becomes permanent (just as before).

    Ryan:
    Fault tolerance (the design principle that encompasses recoverability) definitely doesn't get the attention it deserves in most software development. Fault-tolerant operating systems are pretty popular in embedded systems (no joke) to protect applications from each other and restart things gracefully when something does crash. The closest thing we get on the desktop is "sorry about your luck; would you like to report this crash?" and the occasional document or browser tab recovery.


    I'm not sure I agree that recoverability lies entirely within "fault tolerance". Fault tolerance tends to focus on the idea that we work around, set aside, or ignore problems that occur. It tends to ignore the question of what comes after tolerance; that is, "We've tolerated the error, now how does it get corrected?"

    My focus is hospital business (as opposed to something neat like rocketry). Suppose we make an error in processing an account: The result is that the payer refuses to pay. Generally we can resubmit...but to be able to resubmit we must be able to correct the error. Somewhere within the paper or electronic medical record, we must have enough information to reconstruct what happened and come up with a correct result; the alternative is a write-off (lost $). As processing shifts away from paper toward electronic, it becomes more important for all the original inputs to be retained: Otherwise we may be helpless to figure out exactly what went wrong.

    Reconsider the "donations batch" story: I guess you could say that was the ultimate fault-tolerant system (since all faults were ignored). The issue was really the deletion of the batch: a fundamental design flaw in my view (whether faults are ignored or not). With the batch gone, no matter what went wrong, it can't be fixed.

    There's lots of ways to keep the batch, but if you don't and there was an error of any kind you are helpless to recover. Current thinking on fault tolerance doesn't seem to deal completely with this area of recoverability.
  • J.D. 2011-03-23 01:33
    The article has a point, but what do you do when everything is done almost exactly the opposite way? Minimal or no testing at all, or in the best cases, the code is written and then committed to the repository, never having even been compiled. This is usually a problem with certain individuals; as the article implied, you should have some pride in your work. If you constantly skip every problem or defect you come across by saying "ain't my job", then the problem is you.

    Unfortunately I have had the "pleasure" of working with this kind of guy over the years. Perhaps the most infuriating, and at the same time funniest, comment I've heard was "do it fast now, you can fix it later". And that was in a debate over whether I would write bad code or not. Writing bad code on purpose is the worst thing about these people. The time it takes to think about what you're doing and do it as right as possible on the first run is most probably less than the time it takes to write the bad code first, then write it again, then fix it, and then get someone else to fix it because you forgot what it was supposed to do in the first place.

    I've also written several programs that run tests, for code and for hardware. Usually the problem with testing software in general, at least in our company, is that the people who write the documentation aren't software designers, and the information, especially for hardware testers, comes in view-point-specific documents that tell the developer nothing about what they want, what the product should be doing, and how it should be tested.

    Perhaps there should be some kind of test for the people who are being hired to do a certain job. Oh, but hey, interviews are considered Acceptance testing. But what about the other four points of testing? Haven't seen those applied at the interviewing stage. Perhaps they should reconsider....
  • Matt Westwood 2011-03-23 02:25
    Asdg:
    QJ:
    frits:
    The 100% code coverage metric is so dumb. Have these people never heard of permutation? I assume that 100% of the code is tested at least informally. I mean do some developers really write code that they've never run before releasing it?


    In some of the places I've worked I've had to opine that it would be nice if the developers checked whether their code actually compiles in the first place.


    Nagesh is on holidays.
    When did you work in India? Failing to compile is a bug that should be picked up by testers well before deployment into production. Good quality programmers are expensive and shouldn't waste time on such trivialities as compiling code.

    On a more serious note, I have been in situations where this appears to be the case, but it is simply that not everything that was changed was committed. In particular, I've found header files missing. Also, an individual component may build fine, but issues may arise when a full build is attempted depending on files mutually used (I remember one example where a library had been changed, and the signature of some methods had changed as a result. This meant that other components that relied on that same library wouldn't build under a full release. Admittedly, the libraries are shared in weird ways, and there should be mechanisms to minimise the chance of this happening and a load of other things, but we don't always pick the codebases we inherit. His deployment went through fine, but when subsequent changes had to be made, it took a while to realise that the library had been destroyed by a previous change) or something.


    Yes, those things happen, no doubt about it ("Who hasn't done this?" or whatever the catchphrase is), and it's something that regular scheduled builds (using Cruise Control or Hudson or something sweet like that) usually catch adequately. As those tools do with those knuckle-draggers who haven't checked to see whether it compiles. Unfortunately, because such tools stop the code in its tracks before it *can* get as far as Test, the perceived seriousness of this perpetration is smaller than perhaps it ought to be. ("What, Bubba checked in a java class with undeclared variables again? Cruise Control caught it. Never mind, we fixed it for him ...")
  • Matt Westwood 2011-03-23 02:30
    Alex Papadimoulis:
    Mr Frost:
    I don't understand how testing would have had any impact in identifying a bug that was created post-deployment (the fridge was scratched in transit after it had been built).


    It's a stretched analogy to begin with, but the idea here is more that the supply chain has a defect. In theory, LG could have invested more to fix this defect, but they instead accepted the loss of profit.


    They might have done both. "Gahd dammit, Wolverine, that's your last screw-up! Go and get a job as a barber or something! Hmm ... reckon someone would buy this thing cheap? It's only scratched ..."
  • EPA 2011-03-23 04:46
    TRWTF are American refrigerators, obviously. Everything's always bigger in the US...
  • Ernold 2011-03-23 05:09
    When I explain all of this to that enthusiastic developer, the response is sometimes along the lines of, “but that’s not my job, so who cares?”


    I rarely see that attitude. More likely, "But as the developer, I'm least qualified to say whether it's production-ready". Which is true.
  • flukus 2011-03-23 05:28
    James Emerton:
    Unit Testing is the process of testing a very specific bit of code. Proper unit testing involves mocking out database or file access and testing the operation of a specific class or function.

    Integration Testing tests the full (integrated!) stack of code, such as you might find in a normal runtime. Integration tests will write to the database or filesystem, and as such they are expected to take more time to run than unit tests.

    Unfortunately, there is much confusion surrounding this issue, even among developers. Perhaps this is due to the fact that testing frameworks often have the word "Unit" in their name, in spite of the fact they can be usually applied to any sort of automated testing.


    I came here expecting to find this comment. I was disappointed it didn't appear until the second page of comments.
  • Ernold 2011-03-23 05:42
    boog:
    C-Octothorpe:
    SIDENOTE: are there really people who WANT to test? I mean, speaking as a developer looking at the role of QA, it just doesn't seem all that appealing to me. I am not the smartest guy by far, but from my personal experience, QA tends to attract people with the intelligence levels reserved for the mildly retarded (I'd say in the neighbourhood of 100, give or take 10 pts).

    I should point out that in my experience, the few technically-inclined testers with whom I've worked were actually business analysts who also participated in requirements gathering and analysis, documentation, etc. In other words, the best testers seem to be more than just testers.

    Even better, they already had a good understanding of how the software was supposed to work when they went about their testing responsibilities.


    How about developers who WANT to test?
  • Magnus Persson 2011-03-23 05:56

    That's why there's a method of testing called "decision coverage" which is generally done in addition to code coverage.

    Decision coverage doesn't test every single input to a condition.

    What it does is test each possibility (true or false) for each decision, and shows that each input into that final "true or false" has an effect on the decision. Typically if you find an input to the Boolean expression has no effect on the Boolean it's an error of some sort. It also ensures that conditions which are supposed to be mutually exclusive are, in fact, exclusive.

    This way you test all *outcomes* rather than all possible ways to get that outcome. Of course, this testing is generally used in safety-critical applications where the main goal is safe operation, not some specific functionality; the goal of this testing is to ensure safe states due to (or at least gain a full awareness of the effects of) all decision paths.


    And then you also have path coverage, where every possible path through the decisions is counted.

    The above mentioned

    if (x <= 0)
    obj = NULL;
    if (x >= 0)
    obj.doSomething();

    can be given 100% line AND decision coverage with only two test cases (x<0 and x>0), without finding the null pointer exception. However, with full path coverage, you need to add a third test case (x:=0), exposing also the potential null pointer exception.
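    A rough sketch of those three cases as tests (JUnit 4, with a hypothetical process(int) translation of the snippet into Java):

    import org.junit.Test;

    public class PathCoverageTest {
        @Test public void negativeX() { process(-1); }  // first branch only: obj = null, doSomething() skipped
        @Test public void positiveX() { process(1); }   // second branch only: obj is still live
        @Test(expected = NullPointerException.class)
        public void zeroX() { process(0); }             // both branches: null, then the call blows up

        // Hypothetical stand-in for the snippet above.
        private void process(int x) {
            Runnable obj = () -> { };
            if (x <= 0) obj = null;
            if (x >= 0) obj.run();
        }
    }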

    The downside with path coverage is that it's harder to measure. In more complicated work, it may not at all be evident that the fourth possible path (both if statements evaluate to false) is never possible (e.g. if the value of x may be changed in the first execution). Hence path coverage is harder to measure than line or decision coverage.

    Then, as discussed previously in the thread, you may also have to discuss coverage of user inputs etc...
  • PurpleDog 2011-03-23 06:45
    frits:
    The 100% code coverage metric is so dumb. Have these people never heard of permutation? I assume that 100% of the code is tested at least informally. I mean do some developers really write code that they've never run before releasing it?


    Oh yes... they most certainly do! I work with some of them.
  • Niels 2011-03-23 08:13
    Interesting read... You should really check ISTQB or ASTQB

    http://www.istqb.org (or) http://www.astqb.org/

    Particularly the foundation glossary. It has some of the concepts you mention here completely chewed out for you + tons more. (Especially the test levels and the risk calculation).

    Regards,
    Niels (ISTQB CTAL TM)
  • Niels 2011-03-23 08:15
    PurpleDog:
    frits:
    The 100% code coverage metric is so dumb. Have these people never heard of permutation? I assume that 100% of the code is tested at least informally. I mean do some developers really write code that they've never run before releasing it?


    Oh yes... they most certainly do! I work with some of them.

    I'm afraid I have to go with him... Btw, code coverage only requires you to cover all code at least once. If you want all permutations, you're looking for decision coverage.

    Still, I think there are more intelligent ways to pick only the most interesting cases to test.
  • GalacticCowboy 2011-03-23 08:19
    To follow up on what Coyne said, here's another real-world example. For the past while I've been supporting and extending the "project from hell". The original developer quit unexpectedly within weeks of the project being "done", and as we began digging into the code it became apparent why he had done so: by quitting at that time, he could still give the illusion that the project worked and was on track, when in reality it was a steaming pile of excrement.

    The project is primarily written in Silverlight, with WCF services talking to a SQL Server backend through an Enterprise Library ORM. (Except for the EL, this is all pretty standard stuff. None of the rest of us had seen EL in active production use in 3 or 4 years.)

    This wasn't the developer's first Silverlight app, though it was his first "professional" one and the first one where he tried to use MVVM. However, he tried to stuff MVVM into his existing framework of knowledge instead of the other way around. So in some cases it's as far from MVVM as one can possibly be, while in others it kind of flirts with the edges of what one might consider to be MVVM-like.

    One of the developer's "innovations" was the use of an XML property bag in his objects for all fields that were not directly linked to the object's lifetime. Hey, great - now we don't have to adjust the database schema when something changes, amirite?

    Ultimately, a number of bugs around this system helped us to realize:

    * The "change tracking" (internal state management) in the objects was fragile and in most cases done wrong - for example, for some of the objects, merely loading one from the database was sufficient to trip the "I've changed" condition.
    * The lack of robust error checking - both for sanity checking and actual business logic - allowed the data to be corrupted easily. This ranged from partial corruption, where fields in the XML would not be updated correctly, to complete corruption, where the XML would be entirely wiped out or reset to a default state.
    * The security model prevented users from seeing any data besides their own. However, the underlying data was still silently downloaded *AND PERSISTED* by all users for all other users. Due to the fragility mentioned above, this meant that, if something went wrong on Alex's computer, it could wipe out my data for no apparent reason.

    Several weeks ago, for an unrelated item, we added a trigger-based audit trail on several key tables in the database. Basically, whenever any event occurs on a record, its values are stored as a snapshot in a separate table. However, this audit trail has now saved our butts twice when some kind of data corruption occurred and we were able to easily identify the last-known-good state of certain records and immediately roll them back to that state.

    The ultimate fix - sanity and business rule checks on both the client and server side and preventing users' client apps from uploading any data besides their own - is still in testing, targeting a production deployment later this week. Until then, we're babysitting the database using an audit trail that wasn't even designed for this specific purpose.
  • Lone Star Male 2011-03-23 08:32
    EPA:
    TRWTF are [s]American[/s] Texan refrigerators, obviously. Everything's always bigger in [s]the US[/s] Texas...

    FTFY...and when they say everything, they mean everything, ladies.
  • JadedTester 2011-03-23 08:36
    Niels :
    Interesting read... You should really check ISTQB or ASTQB

    http://www.istqb.org (or) http://www.astqb.org/

    Particulary the foundation glossary. It has some of the concepts you mention here completely chewed out for you + tons more. (Especially the test levels and the risk calculation).

    Regards,
    Niels (ISTQB CTAL TM)


    Because being a good tester is about being able to correctly answer 26/40 multiple choice questions.
  • Anonymous 2011-03-23 08:37
    Lone Star Male:
    EPA:
    TRWTF are [s]American[/s] Texan refrigerators, obviously. Everything's always bigger in [s]the US[/s] Texas...

    FTFY...and when they say everything, they mean everything, ladies.

    Yep, a perfect description of the Texan ego.
  • Lone Star Male 2011-03-23 08:41
    Anonymous:
    Lone Star Male:
    EPA:
    TRWTF are [s]American[/s] Texan refrigerators, obviously. Everything's always bigger in [s]the US[/s] Texas...

    FTFY...and when they say everything, they mean everything, ladies.

    Yep, a perfect description of the Texan ego.

    Well, it is hard to hide...if you know what I mean.
  • Frank 2011-03-23 09:20
    Wow. This is the biggest WTF I've ever seen. A software tester who understands that 100% code coverage != quality! What's next? A software quality person who understands that more process won't always prevent the introduction of bugs? Or a manager who understands both? (OK, I know the manager thing is simply asking for too much.)
  • Machtyn 2011-03-23 09:30
    нагеш:
    Machtyn:
    I would like to sub scribe to this the ory about never leaving the house to avoid getting hit by a bus.



    You still run the risk of an airplane crashing through your roof, or heck, a bus running through your front door. Maybe Nagesh and his taxi will let themselves in.


    Oh, come on... If akismet had allowed me to post a very fine picture (or a link to one) of a school bus that had crashed through a house, I would have. My problem is that I should have completed my critique by stating as much. (Do a GIS for "bus crashes into house"... that got me results.)

    /CAPTCHA augue: what two people do when they have a disagweement.
  • Spurgeon General 2011-03-23 10:01
    Lone Star Male:
    Anonymous:
    Lone Star Male:
    EPA:
    TRWTF are [s]American[/s] Texan refrigerators, obviously. Everything's always bigger in [s]the US[/s] Texas...

    FTFY...and when they say everything, they mean everything, ladies.

    Yep, a perfect description of the Texan ego.

    Well, it is hard to hide...if you know what I mean.

    I had an uncle from Texas who was bit by a rattlesnake. After 3 days of intense pain, the rattlesnake died.
  • bob 2011-03-23 10:35

    The foundation guy notices a problem with the plans, but says that the framer will fix it. The framer says that the drywaller will fix it, the drywaller says the finish carpenter will fix it, the finish carpenter says the painter will fix it, and the painter says “I sure hope the homeowner is blind and doesn’t see it.”


    The general contractor is the person you need to deal with, not the sub-contractors. The general contractor will be the one to get it fixed, and he'll delegate.

    You can test all day but there will always be something wrong somewhere that is missed. Flawed logic statements or flaws in building materials always happen and you'll never see either until something happens.
  • boog 2011-03-23 10:38
    Dehli Belly:
    I would have thought testing requires all sorts of testing by all sorts of different parties to give the product the best chance at quality. Technical-only testing doesn't work, because technical people aren't necessarily looking for results to mean the same thing. "Broken" doesn't mean the same thing either, nor any one of a number of descriptions for problems that occur. This site brings many of them to light....

    That depends on the product and its potential users, but I'm not talking about "types" of testing anyway. Yeah, technical-only testing probably isn't enough; I don't think anyone said otherwise.

    I'm talking specifically about the testers' ability to clearly report failing tests and defects. How do you expect to efficiently deal with bugs if you don't have any details about them?
  • golddog 2011-03-23 10:42
    C-Octothorpe:
    trtrwtf:
    anon#213:
    trtrwtf:
    anon#213:
    I thought the wtf was the fridge being too short. That small amount of useless space would drive me crazy.


    Your pretty easily driven crazy, aren't you?

    DAMMIT ITS "YOU'RE", NOT "YOUR"!!!!!!!


    Thanks, C-Octo. I owe you a beer.
    (I love to see them pop like that)


    Yeah, it usually starts with a slight twitch in one eye, and goes from there...


    Clouseau!!!!
  • Cue Hilarious Laughter 2011-03-23 10:45
    Lone Star Male:
    Anonymous:
    Lone Star Male:
    EPA:
    TRWTF are [s]American[/s] Texan refrigerators, obviously. Everything's always bigger in [s]the US[/s] Texas...

    FTFY...and when they say everything, they mean everything, ladies.

    Yep, a perfect description of the Texan ego.

    Well, it is hard to hide...if you know what I mean.

    This article is about tests. I think your thinking of testes.
  • C-Octothorpe 2011-03-23 11:02
    Cue Hilarious Laughter:
    Lone Star Male:
    Anonymous:
    Lone Star Male:
    EPA:
    TRWTF are [s]American[/s] Texan refrigerators, obviously. Everything's always bigger in [s]the US[/s] Texas...

    FTFY...and when they say everything, they mean everything, ladies.

    Yep, a perfect description of the Texan ego.

    Well, it is hard to hide...if you know what I mean.

    This article is about tests. I think your thinking of testes.


    There's a difference?
  • anon 2011-03-23 11:12
    Yes. Knuth would be an obvious example; in at least one piece of his correspondence he notes "Beware of bugs in the above code; I have only proved it correct, not tried it."

    Also, the number of people who turn "i=i+1;" statements into "i++;" statements, run automagic code formatters w/o testing, or are just plain old 'it runs once, so every line in this file is automagically good' kind of people
  • boog 2011-03-23 11:36
    The WTF is my damn penguin, I'm pretty sure I'd strangle the where?
  • callcopse 2011-03-23 12:09
    TRWTF is clearly insecure septic tanks trying to advertise their purportedly outsized genitalia in programming blog comments. Who's that for the benefit of then?
  • Rob 2011-03-23 12:18
    C-Octothorpe:
    boog:
    Anonymous:
    I agree, I'd love to work with technically-minded testers, but I am begrudgingly speaking from experience. We only have one capable tester, the rest are rejects who only test because they don't have the skill to do anything else in IT.
    This is what I've experienced too; it's easy to tell which testers actually know what they're doing. Unfortunately, management wrongly sees testing as a mindless (and profitless) task, so they usually try to hire testers from the local zoo.

    Anonymous:
    The monkey analogy is all too accurate, to be honest.
    My advice: wear a raincoat.


    ... and goggles. Some of them have sniper aim...

    SIDENOTE: are there really people who WANT to test? I mean, speaking as a developer looking at the role of QA, it just doesn't seem all that appealing to me. I am not the smartest guy by far, but from my personal experience, QA tends to attract people with the intelligence levels reserved for the mildly retarded (I'd say in the neighbourhood of 100, give or take 10 pts). This usually results in three kinds of QA monkeys:

    1) The new grad/career switch above-average smart person trying to get into software development. Some people take this path, which is fine, and in some corps a necessary step...

    2) The shitty, change resistant x-developer who couldn't hash it anymore and (in)voluntarily goes into QA. This is more rare as pride usually takes over and they just quit and f*ck up another dev shop with their presence.

    3) The mildly retarded sociopaths who don't have the intelligence or social skills necessary to make it in any other role IT related. They may be able to make something of themselves in another profession, but because they got mad skillz at Halo, they MUST be good at everything IT related, which keeps them in the QA gig.

    I realize this sounds very elitist, which isn't my intention but this is just what I have noticed/pondered...


    Maybe I'm some freak of nature; but I tend to enjoy things that other developers roll their eyes at. I have no preference between working on bugs in existing code vs. writing some new feature. From what I've seen, most people hate fixing bugs (particularly when it's code they didn't write). Likewise, I enjoy testing.

    The problem is, it feels like my career options are greater as a software developer than a software tester. If I go to Dice or CareerBuilder or whatever, there are a lot of dev jobs I fit the profile for, but fewer QA type jobs. And the pay of the QA type jobs is lower (most of the time). And, with every year of additional development work I gain, the harder it seems to switch. An entry level QA job doesn't pay enough. The well paying QA job requires X years of experience in QA, and as a developer, I don't have that.
  • Yankee Doodle 2011-03-23 12:20
    callcopse:
    TRWTF is clearly insecure septic tanks trying to advertise their purportedly outsized genitalia in programming blog comments. Who's that for the benefit of then?


    TRWTF is cockney rhyming slang.
  • hoodaticus 2011-03-23 12:21
    Rob:
    The problem is, it feels like my career options are greater as a software developer than a software tester. If I go to Dice or CareerBuilder or whatever, there are a lot of dev jobs I fit the profile for, but fewer QA type jobs. And the pay of the QA type jobs is lower (most of the time). And, with every year of additional development work I gain, the harder it seems to switch. An entry level QA job doesn't pay enough. The well paying QA job requires X years of experience in QA, and as a developer, I don't have that.
    Just be thankful you were born with the gift of programming. Not everyone can do it - no matter how smart they are otherwise.
  • C-Octothorpe 2011-03-23 13:09
    Rob:

    The problem is, it feels like my career options are greater as a software developer than a software tester. If I go to Dice or CareerBuilder or whatever, there are a lot of dev jobs I fit the profile for, but fewer QA type jobs. And the pay of the QA type jobs is lower (most of the time). And, with every year of additional development work I gain, the harder it seems to switch. An entry level QA job doesn't pay enough. The well paying QA job requires X years of experience in QA, and as a developer, I don't have that.


    The point I'm trying to make is that IF you can make more money and have greater career opportunities, why would you settle for QA (lower pay usually, a dead-end job unless you go into mgmt)? I just leave it at "to each his/her own".

    I know of a few people who went from doing development fulltime into IT security (white-hat hacking, etc.), which I can see, but I have yet to see a "good" developer (or person who can easily do development) go into QA.
  • C-Octothorpe 2011-03-23 13:12
    C-Octothorpe:
    Rob:

    The problem is, it feels like my career options are greater as a software developer than a software tester. If I go to Dice or CareerBuilder or whatever, there are a lot of dev jobs I fit the profile for, but fewer QA type jobs. And the pay of the QA type jobs is lower (most of the time). And, with every year of additional development work I gain, the harder it seems to switch. An entry level QA job doesn't pay enough. The well paying QA job requires X years of experience in QA, and as a developer, I don't have that.


    The point I'm trying to make is that IF you can make more money and have greater career opportunities, why would you settle for QA (lower pay usually, a dead-end job unless you go into mgmt)? I just leave it at "to each his/her own".

    I know of a few people who went from doing development fulltime into IT security (white-hat hacking, etc.), which I can see, but I have yet to see a "good" developer (or person who can easily do development) go into QA for more than a temp gig because it's cool (I would test a Mercedes SLS for free, damnit!).


    ... sorry, in response to what boog said a little while back.
  • Tzimisce 2011-03-23 13:38
    I disagree. Unit testing is different from integration testing. Unit tests should be testing classes in isolation, and should not involve external resources (databases, file system, web services, etc.). At least for TDD, a primary purpose of unit tests is to drive design.
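
    To make "isolation" concrete, here's a rough sketch (InvoiceCalculator and RateRepository are names I made up for illustration, not anything from the article) of a unit test where an in-memory fake stands in for the database:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class InvoiceCalculatorTest {

        // Hypothetical collaborator that would normally be backed by a database.
        interface RateRepository {
            double rateFor(String customerId);
        }

        // Class under test: it depends only on the interface, never on the database itself.
        static class InvoiceCalculator {
            private final RateRepository rates;
            InvoiceCalculator(RateRepository rates) { this.rates = rates; }
            double total(String customerId, double hours) {
                return rates.rateFor(customerId) * hours;
            }
        }

        @Test
        public void totalUsesTheCustomersHourlyRate() {
            // In-memory fake instead of an external resource.
            RateRepository fake = new RateRepository() {
                public double rateFor(String customerId) { return 50.0; }
            };
            assertEquals(400.0, new InvoiceCalculator(fake).total("acme", 8), 0.0001);
        }
    }

    A test like that runs in milliseconds and fails for exactly one reason, which is the point.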

    Permitting customers or managers to dictate low quality levels is unprofessional, in my opinion. Uncle Bob Martin has many presentations on this subject. Is it ethical for an electrician to do a shoddy job in a new housing development because the developers asked him to? To quote Kent Beck, "Quality is terrible as a control variable."
  • boog 2011-03-23 14:03
    Tzimisce:
    I disagree. Unit testing is different from integration testing. Unit tests should be testing classes in isolation, and should not involve external resources (databases, file system, web services, etc.).

    Given Alex's definition of Integration Testing, automated unit tests fit squarely under that category. You may certainly disagree with his definition (or more specifically the "Integration" label he attached to the definition), but the whole point of automated unit testing is to test newly-integrated changes to the code, so Alex's relating of the two is appropriate.
  • The Sound of 1 Foot Tapping 2011-03-23 14:54
    Lazy Bastard:
    It’s been a rough couple weeks. Not only did I have all sorts of catching-up to do after Code PaLOUsa, but it also happened to be release week. And oh, do I hate release week.

    So, what's your freakin' exuse now?
  • Design Pattern 2011-03-23 15:07
    The Sound of 1 Foot Tapping:

    So, what's your freakin' ex use now?

    So today Axel did not pass, but broke up with his drug-addicted girl friend?

    CAPTCHA: mara - is this the name of the girl friend?
  • Nagesh 2011-03-23 15:08
    Design Pattern:
    The Sound of 1 Foot Tapping:

    So, what's your freakin' ex use now?

    So today Axel did not pass, but broke up with his drug-addicted girl friend?

    CAPTCHA: mara - is this the name of the girl friend?


    You come here like crack addict and make excuse for others?
  • HonoredMule 2011-03-23 15:25
    I'd just like to point out: it is still theoretically possible to get hit by a bus even if you never leave your house.

    99.999%...
  • Nagesh 2011-03-23 16:26
    HonoredMule:
    I'd just like to point out: it is still theoretically possible to get hit by a bus even if you never leave your house.

    99.999%...


    <insert toy bus photo here>
  • helix 2011-03-23 16:28
    i would use it for space to store trays - you know the big ones that go on your lap to eat dinner when you are in front of the telly alone
  • The Coward 2011-03-23 16:28
    HonoredMule:
    I'd just like to point out: it is still theoretically possible to get hit by a bus even if you never leave your house.

    99.999%...


    Or is it you hitting the bus? :)

    Anyhow my experience with TDD is that people spend ages faffing about with writing tests and not actually getting anywhere fast. I knock together something and get it working fast, then consider writing tests which aid in "and how could this be done better?".

    After all in just about every other creative human endeavour you start out with rough sketches then see how this suits your needs (architect designing a new house, artist making a new character, etc). Then you increase the resolution and accuracy as you progress along (almost like zooming in on a fractal). You don't get the architect and some builders in the middle of a field and start building. In addition you are adding "mass" to the system which makes it more time consuming to go back and change fundamentals (re-factoring) when you could be more lean. (Funny how Agile types get rid of most/all of the spec/analysis docs then end up writing them in a "cool" new way).

    In the future I'd like to see something like JTAG where you do not write separate code for tests but have probes/tripwires in the real code (even part of the language? ADA95?) and then have datasets which can set that up and check the values that pass by, rather than having test code which can be obscure in itself.
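
    (The closest cheap approximation I know of today, sketched off the top of my head in Java rather than Ada, is assertions that live in the production code itself and get switched on at runtime with -ea, so the "probe" checks the real values flowing through the real path. PricingProbes and applyDiscount are made-up names for illustration only.)

    class PricingProbes {
        // "Tripwire" checks live in the production code, not in a separate test
        // harness, and are enabled at runtime with: java -ea ...
        static double applyDiscount(double price, double discountPercent) {
            assert discountPercent >= 0 && discountPercent <= 100
                    : "discount out of range: " + discountPercent;
            double discounted = price * (1 - discountPercent / 100);
            assert discounted >= 0 && discounted <= price
                    : "discount produced a nonsense price: " + discounted;
            return discounted;
        }
    }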

    My favourite issue with tests is that they do not really say if your product actually works, they just say for this very specific dataset this is what will happen in this environment. I have seen many a testing suite that results in 100% green lights but actually fails in real use, simply because the test data is not aligned with the real world data/usage! (very apparent for data driven apps).

    Lastly, people who are 100% code coverage fundamentalists seem to build code that has little fault tolerance in it [in my experience anyways]. I always try to make a program/function fail gracefully and clean up after itself. It's all very well that an exception is thrown because it is the "correct" thing to do. Doesn't help your users at 2am if it all just disappears into a black hole!

    Tests in reality are like the Chinese proverb about a man and the time:

    A man with one watch knows the time.
    A man with two watches is never quite sure.

    How many people here change their tests/data to get things to pass? This is where a proper analysis/spec document explaining the why is required.

    Ah well, let's wait and see what the next software development band wagon is!
  • DevsDontGetItEither 2011-03-23 17:09

    So you don't like testing at least some of your own code ... hm.
    And don't like fixing bugs ... hm.
    Both of which are needed in any -worthwhile- s/w project (one that is sure to be a steaming pile of dung without either of the above).

    Explain to me, why you are a developer, then?
  • Juqiel 2011-03-23 17:23
    Lone Star Male:
    Anonymous:
    Lone Star Male:
    EPA:
    TRWTF are American Texan refrigerators, obviously. Everything's always bigger in the US Texas...

    FTFY...and when they say everything, they mean everything, ladies.

    Yep, a perfect description of the Texan ego.

    Well, it is hard to hide...if you know what I mean.

    A Texan ego is hard to hide?
  • DeLos 2011-03-23 17:25
    I thought the defect was going to be that he bought a refrigerator too short. That gap at the top is unsightly.
  • Dehli Belly 2011-03-23 17:27
    boog:
    Dehli Belly:
    I would have thought testing requires all sorts of testing by all sorts of different parties to give the product the best chance at quality. Technical-only testing doesn't work, because technical people aren't necessarily looking for results to mean the same thing. "Broken" doesn't mean the same thing either, nor any one of a number of descriptions for problems that occur. This site brings many of them to light....

    That depends on the product and its potential users, but I'm not talking about "types" of testing anyway. Yeah, technical-only testing probably isn't enough; I don't think anyone said otherwise.

    I'm talking specifically about the testers' ability to clearly report failing tests and defects. How do you expect to efficiently deal with bugs if you don't have any details about them?


    The way it was expressed (from memory) suggested (to me at least) that having techos test would fix the problem because they could better articulate the problem.

    <copout>
    That said, I wasn't necessarily disagreeing or arguing either, merely discussing
    </copout>
  • id10t 2011-03-23 17:28
    anon:
    Yes. Knuth would be an obvious example, as in at least one piece of his correspondence he notes "Beware of bugs in the above code; I have only proved it correct, not tried it."

    Also, the number of people who turn "i=i+1;" statements into "i++;" statements; run automagic code formatters w/o testing; and just plain old 'it runs once, so every line in this file is automagically good' kind of people


    Didn't Knuth the Polar Bear die recently?
  • The Coward 2011-03-23 17:32
    DevsDontGetItEither:

    So you don't like testing at least some of your own code ... hm.
    And don't like fixing bugs ... hm.
    Both of which are needed in any -worthwhile- s/w project (one that is sure to be a steaming pile of dung without either of the above).

    Explain to me, why you are a developer, then?


    Not sure if you are addressing that to me or not, but anyhow I am not saying testing is bad, just that a) understand the limits of it and b) don't sit there spending a day writing tests for code that does not exist and has no form yet.

    Use it as an aid to make the code better designed, sure; use it as a starting point, well, not so much.

    I think the issue is that requirements are soft at first but writing tests first makes it concrete and absolute without the ability to play with the form first. For example:

    "I want a car and door handles about waist height"

    Code world:
    Test:
    Door handle at x,y,z coordinate.
    Code:
    Build code with door handle at x,y,z coordinate.

    Show to user(s),
    "Yeah it is ok but a bit higher would be nice and more laid in"

    Result:
    Lot of "production" level code and tests to change.

    Real world:
    Make foam door quickly (hack and slash), stick block of foam as handle on it.

    Show to user(s),
    Let them move it about.

    Result:
    Cheap non-production item which can then be realised later to production quality with hard-set tests.

    This is how most real things are made: as inexpensively as possible first, then, once the form is realised, made to production quality with all the quality tools brought to bear.


    Interesting side point: software built via unit tests and IoC etc. results in many more pieces of software with more interfaces and interactions. Real-world production of items is always driving down the number of parts and interfaces, as these are what make it more expensive/less reliable.

    Perhaps software should be built twice: once shitty and quick, the second time to production quality (and learning from mistakes). But hey, who is going to pay for the second part? ;)

  • Matt Westwood 2011-03-23 17:52
    helix:
    i would use it for space to store trays - you know the big ones that go on your lap to eat dinner when you are in front of the telly alone


    Nope, can't grasp the context. What is this "alone" you speak of?
  • boog 2011-03-23 18:05
    Dehli Belly:
    The way it was expressed (from memory) suggested (to me at least) that having techos test would fix the problem because they could better articulate the problem.

    Not "techos" necessarily, but somewhat-intelligent life forms with even a mildly-technical mindset, much more so than trained monkeys. And not that they can articulate the problem itself (that is, not the cause of the bug), but rather
    1) what the unexpected behavior was;
    2) a screenshot or any error details presented, if any; and
    3) the conditions that led to this behavior (actions, date/time, inputs, etc.).

    The lesser-testers just send an email saying "it's not working", leaving you to figure out what that even means.

    Dehli Belly:
    <copout>
    That said, I wasn't necessarily disagreeing or arguing either, merely discussing
    </copout>
    As was I, my communicative cohort. As was I.

    Addendum (2011-03-23 18:12):
    Also: Not that having smarter testers would "fix the problem", but rather that I prefer smarter testers over chimps.
  • Ho Hum 2011-03-23 18:08
    Rob:
    C-Octothorpe:
    boog:
    Anonymous:
    I agree, I'd love to work with technically-minded testers, but I am begrudgingly speaking from experience. We only have one capable tester, the rest are rejects who only test because they don't have the skill to do anything else in IT.
    This is what I've experienced too; it's easy to tell which testers actually know what they're doing. Unfortunately, management wrongly sees testing as a mindless (and profitless) task, so they usually try to hire testers from the local zoo.

    Anonymous:
    The monkey analogy is all too accurate, to be honest.
    My advice: wear a raincoat.


    ... and goggles. Some of them have sniper aim...

    SIDENOTE: are there really people who WANT to test? I mean, speaking as a developer looking at the role of QA, it just doesn't seem all that appealing to me. I am not the smartest guy by far, but from my personal experience, QA tends to attract people with the intelligence levels reserved for the mildly retarded (I'd say in the neighbourhood of 100, give or take 10 pts). This usually results in three kinds of QA monkeys:

    1) The new grad/career switch above-average smart person trying to get into software development. Some people take this path, which is fine, and in some corps a necessary step...

    2) The shitty, change-resistant ex-developer who couldn't hack it anymore and (in)voluntarily goes into QA. This is more rare as pride usually takes over and they just quit and f*ck up another dev shop with their presence.

    3) The mildly retarded sociopaths who don't have the intelligence or social skills necessary to make it in any other role IT related. They may be able to make something of themselves in another profession, but because they got mad skillz at Halo, they MUST be good at everything IT related, which keeps them in the QA gig.

    I realize this sounds very elitist, which isn't my intention but this is just what I have noticed/pondered...


    Maybe I'm some freak of nature; but I tend to enjoy things that other developers roll their eyes at. I have no preference between working on bugs in existing code vs. writing some new feature. From what I've seen, most people hate fixing bugs (particularly when it's code they didn't write). Likewise, I enjoy testing.

    The problem is, it feels like my career options are greater as a software developer than a software tester. If I go to Dice or CareerBuilder or whatever, there are a lot of dev jobs I fit the profile for, but fewer QA type jobs. And the pay of the QA type jobs is lower (most of the time). And, with every year of additional development work I gain, the harder it seems to switch. An entry level QA job doesn't pay enough. The well paying QA job requires X years of experience in QA, and as a developer, I don't have that.


    I tend to agree that I am equally happy in support as in development (although development where I had full control to dictate design and technology choice might be interesting). I had a manager once who insisted that all developers prefer to be in development roles (I was arguing that I was more than happy not to be rotated between support and development). He found that bizarre because he claimed that people like development because they get more exposure ("Wow, look... we actually released something"). Frankly, I suspect support staff get far more credit because they fix problems that already exist... That is, they stop problems that are already affecting users (<cynicism>not introducing new ones</cynicism>). Clients don't appreciate a good product until it has proven itself a good product. By this time, development teams are long forgotten....
  • Gary S 2011-03-23 20:59
    MadJo (professional software tester):


    And you NEVER let a developer test his own stuff, because it'll be hard to get a grip on the quality in that case.
    No one is truly really critical on his or her own creations. "Oh, I know what I mean there. I'll just leave it in." (also testers shouldn't review their own testcases, let someone else do that)


    What on earth are you talking about? Of course the developer has to test their own stuff. How else will they know they've actually done the job? Someone else should also test their stuff, certainly, but the developer has to be the first person to give it a thorough test.

    One of the biggest scourges in this industry is developers who never test their work. "Oh, the unit tests / QA team will find any problems, so I don't need to bother". Too many times I've seen some idiot implement a repeater with paging controls and tell me it's done, and found that had they tried even ONCE to see if the paging worked, they would have realised that it didn't.

    Developers need to be responsible for their work. That means they must test what they do thoroughly. By all means have additional testing to verify their work, but this business about never "letting" developers test is a complete load of horseshit. By the time the work gets to the testing team, bugs should be extremely hard to find or non-existent. That does not happen by accident.
  • Jason 2011-03-24 01:09
    I know many others have said similar things, but this seems to be becoming the bi-weekly WTF (maybe tri-weekly)....
  • Mathy 2011-03-24 02:06
    Great post. While I agree with the point you're trying to make, that at best (with unlimited resources) you can be 99.99...% confident to some finite number of nines and never 100%, the math nerd in me wants to point out that:

    99.9999... = 100

    100/9 = 11.111...
    9 * (100/9) = 9 * (11.111...)
    900/9 = 99.999...
    100 = 99.999...
    QED

  • MacFrog 2011-03-24 06:26
    Or, rather recently, from around where I live:



    BTW: This is not funny. Two people died in this accident.
  • trwtf 2011-03-24 07:08
    MacFrog:
    Or, rather recently, from around where I live:



    BTW: This is not funny. Two people died in this accident.

    BWAHAHAHAHAHA, casualties! Wait, what?
  • Design Pattern 2011-03-24 08:09
    Nagesh:
    HonoredMule:
    I'd just like to point out: it is still theoretically possible to get hit by a bus even if you never leave your house.

    99.999%...


    <insert toy bus photo here>


    Toy bus photo here!
  • martin 2011-03-24 08:11
    minimize the codebase and to keep things as simple as possible, thereby reducing the number of components and the overall complexity. But that’s a whole different soapbox.


    That's an interesting soapbox. I'd love to hear some thoughts on that one. Especially on telling the customer not to have another useless expensive feature.
  • minime 2011-03-24 08:34
    C-Octothorpe:
    SIDENOTE: are there really people who WANT to test?


    I met a few, and they don't really want to test everything, but they have a certain interest in proving that what other people did was wrong, and they came up with really interesting ways to rape our systems.


    Another thing that has not really been discussed here is that testing is usually one of the first things (after documentation) that goes overboard when hard deadlines have to be met. And sometimes the deadlines can be so hard that you throw everything overboard and have the go-live as an integration test... that, dead people, is the unfortunate thing called "life outside the ivory tower".
  • boog 2011-03-24 08:42
    The Sound of 1 Foot Tapping:
    Lazy Bastard:
    It’s been a rough couple weeks. Not only did I have all sorts of catching-up to do after Code PaLOUsa, but it also happened to be release week. And oh, do I hate release week.

    So, what's your freakin' exuse now?

    I'm pretty sure I would strangle you before you got any further. Look, this is Alex's site, and if he wants to flush it down the toilet by never posting any articles, that's his prerogative. If you don't like it, you can go start your own humour/software site.

    BTW, if you do this, please post a link here. With the lack of articles lately, there's been nothing to do but turn to trolling.
  • The Explainer 2011-03-24 08:50
    boog:
    The Sound of 1 Foot Tapping:
    Lazy Bastard:
    It’s been a rough couple weeks. Not only did I have all sorts of catching-up to do after Code PaLOUsa, but it also happened to be release week. And oh, do I hate release week.

    So, what's your freakin' exuse now?

    I'm pretty sure I would strangle you before you got any further. Look, this is Alex's site, and if he wants to flush it down the toilet by never posting any articles, that's his prerogative. If you don't like it, you can go start your own humour/software site.

    BTW, if you do this, please post a link here. With the lack of articles lately, there's been nothing to do but turn to trolling.


    Pretty much, lately it's been The Daily "WTF? No article again?"
  • Michael Jordan 2011-03-24 08:51
    The Explainer:
    boog:
    The Sound of 1 Foot Tapping:
    Lazy Bastard:
    It’s been a rough couple weeks. Not only did I have all sorts of catching-up to do after Code PaLOUsa, but it also happened to be release week. And oh, do I hate release week.

    So, what's your freakin' exuse now?

    I'm pretty sure I would strangle you before you got any further. Look, this is Alex's site, and if he wants to flush it down the toilet by never posting any articles, that's his prerogative. If you don't like it, you can go start your own humour/software site.

    BTW, if you do this, please post a link here. With the lack of articles lately, there's been nothing to do but turn to trolling.


    Pretty much, lately it's been The Daily "WTF? No article again?"


    Oh, see, you thought "W" was for "What". It's actually for "When."
  • Name Your 2011-03-24 08:54
    minime:
    C-Octothorpe:
    SIDENOTE: are there really people who WANT to test?


    I met a few, and they don't really want to test everything, but they have a certain interest in proving that what other people did was wrong, and they came up with really interesting ways to rape our systems.


    Another thing that has not really been discussed here is that testing is usually one of the first things (after documentation) that goes overboard when hard deadlines have to be met. And sometimes the deadlines can be so hard that you throw everything overboard and have the go-live as an integration test... that, dead people, is the unfortunate thing called "life outside the ivory tower".


    I think you meant "dear people", unless you were talking about Therac-25 or something.
  • Roger Wolff 2011-03-24 09:12
    Alex, nice article...

    But you're making it sound as if testing is only done to catch errors introduced by changes. As if the testers are supposed to ignore errors that slipped by during testing of the previous release. This of course is not intended or wanted.

    Testing should ensure the quality of the upcoming release. Not just the changes from the last version to the upcoming version.

    Suppose a tester finds a possible bug that has been in the codebase for ages. "won't fix" because it wasn't introduced with a change for the upcoming release? Absurd.
  • Pedantic Fool 2011-03-24 10:02
    Jason:
    I know many others have said similar things, but this seems to be becoming the bi-weekly WTF (maybe tri-weekly)....


    http://grammar.quickanddirtytips.com/biweekly-versus-semiweekly.aspx

    Although, you get 10 pedant points for using the four dotted ellipsis at the end of a sentence....
  • frits 2011-03-24 10:07
    I almost forgot about one of the most effective strategies for identifying defects. Have developers test each other's code. Make sure to match up developers that really dislike each other.
  • trtrwtf 2011-03-24 10:56
    helix:
    i would use it for space to store trays - you know the big ones that go on your lap to eat dinner when you are in front of the telly alone


    It's still early, but that's the saddest thing I've read all day.
    Don't you have a newspaper and a kitchen table?
  • Anonymous 2011-03-24 11:29
    frits:
    I almost forgot about one of the most effective strategies for identifying defects. Have developers test each other's code. Make sure to match up developers that really dislike each other.

    I find that most offices have at least one person who is a total perfectionist and will bitch and moan at the slightest defect or any infraction of coding standards, however trivial. That person is generally pretty unpopular but they police the code better than any project manager could. I'm a little bit ashamed to admit that in this office, it's me.
  • Hatshepsut 2011-03-24 11:41
    trtrwtf:
    helix:
    i would use it for space to store trays - you know the big ones that go on your lap to eat dinner when you are in front of the telly alone


    It's still early, but that's the saddest thing I've read all day.
    Don't you have a newspaper and a kitchen table?


    The newspaper's a good idea to spread on the sofa to catch dribbles, but I'm not sure the kitchen table would fit on my lap.
  • Moug 2011-03-24 16:21
    Ok, the real WTF in that is the picture of the clock...
    Wow, that's ugly.
  • lf8 2011-03-24 18:00
    Pedantic Fool:
    Jason:
    I know many others have said similar things, but this seems to be becoming the bi-weekly WTF (maybe tri-weekly)....


    http://grammar.quickanddirtytips.com/biweekly-versus-semiweekly.aspx

    Although, you get 10 pedant points for using the four dotted ellipsis at the end of a sentence....


    So, he's implying that the article is posted every two weeks not twice a week. Who cares? I think the point was that it seems a long time between drinks.

    Shouldn't there be 3 dots, not 4? Or was that your point?
  • Jelly 2011-03-24 18:07
    Anonymous:
    frits:
    I almost forgot about one of the most effective strategies for identifying defects. Have developers test each other's code. Make sure to match up developers that really dislike each other.

    I find that most offices have at least one person who is a total perfectionist and will bitch and moan at the slightest defect or any infraction of coding standards, however trivial. That person is generally pretty unpopular but they police the code better than any project manager could. I'm a little bit ashamed to admit that in this office, it's me.


    No offence intended, but in my experience the people who seem to be the biggest perfectionists about others' code are the ones that struggle to produce their own (I'm not saying this is necessarily the category you fit into, btw).

    I have worked with many people who in code reviews would get pedantic about a miscellany of issues including misspellings in change descriptions, copyright needs updating, indenting slightly askew, insisting people identify where memory allocated was freed (which always made me think they couldn't find the spot themselves), insisting that pointers to freed memory be nulled, variable names etc.
    Although some of these things may be good practice, it seemed that these people would like to appear productive at code-reviews to make up for the fact that there was rarely any of their code being reviewed (and when there was, it would almost always either be copied from elsewhere, or actually done by someone else (or at least, any issues that came about in reviews of their code were dismissed as somehow being someone else's fault)).
  • Pedantic Fool 2011-03-24 23:24
    lf8:
    Pedantic Fool:
    Jason:
    I know many others have said similar things, but this seems to be becoming the bi-weekly WTF (maybe tri-weekly)....


    http://grammar.quickanddirtytips.com/biweekly-versus-semiweekly.aspx

    Although, you get 10 pedant points for using the four dotted ellipsis at the end of a sentence....


    So, he's implying that the article is posted every two weeks not twice a week. Who cares? I think the point was that it seems a long time between drinks.

    Shouldn't there be 3 dots, not 4? Or was that your point?

    Didn't you read my name? BTW-It's definitely 4 dots at the end of a sentence.
  • Anonymous 2011-03-25 08:02
    Jelly:
    Anonymous:
    frits:
    I almost forgot about one of the most effective strategies for identifying defects. Have developers test each other's code. Make sure to match up developers that really dislike each other.

    I find that most offices have at least one person who is a total perfectionist and will bitch and moan at the slightest defect or any infraction of coding standards, however trivial. That person is generally pretty unpopular but they police the code better than any project manager could. I'm a little bit ashamed to admit that in this office, it's me.


    No offence intended, but in my experience the people who seem to be the biggest perfectionists about others' code are the ones that struggle to produce their own (I'm not saying this is necessarily the category you fit into, btw).

    I have worked with many people who in code reviews would get pedantic about a miscellany of issues including misspellings in change descriptions, copyright needs updating, indenting slightly askew, insisting people identify where memory allocated was freed (which always made me think they couldn't find the spot themselves), insisting that pointers to freed memory be nulled, variable names etc.
    Although some of these things may be good practice, it seemed that these people would like to appear productive at code-reviews to make up for the fact that there was rarely any of their code being reviewed (and when there was, it would almost always either be copied from elsewhere, or actually done by someone else (or at least, any issues that came about in reviews of their code were dismissed as somehow being someone else's fault)).

    I hear you but there is a difference between pedantic and assiduous. Policing non-functional stuff (indenting, variable names etc) does not improve the quality of the code in any way, it's just not helpful. Similarly, one person's coding style is no better than another's and it's not helpful to push "your" style onto someone else - as long as everyone follows the coding standards it doesn't matter. As far as I'm concerned, if it doesn't affect the code in terms of function or readability/maintainability, then it doesn't matter. I certainly don't think I'm one of those pedantic overseers but you'd have to ask my colleagues!
  • Tim Barrass 2011-03-25 10:59
    The most robust system I ever helped design and build (handling continuous large-scale transfers of data between academic institutions) was robust because it assumed from the outset that every single operation would fail.

    Actually, it wasn't so much an assumption as a statement of fact about the components we were building our system from.

    This meant our state model for the system's workflow had to be as tight as could be, and that we explicitly looked to handle failures.
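
    (Not our actual code, but the shape of the idea in a few lines: every transfer carries an explicit state, and failure is a first-class transition rather than an afterthought. The names are made up for illustration.)

    enum TransferState { PENDING, IN_PROGRESS, FAILED, DONE }

    class Transfer {
        private TransferState state = TransferState.PENDING;
        private int attempts = 0;

        // Every operation is assumed capable of failing; the state model decides
        // explicitly whether to retry later or give up.
        TransferState attempt(Runnable operation, int maxAttempts) {
            state = TransferState.IN_PROGRESS;
            try {
                operation.run();
                state = TransferState.DONE;
            } catch (RuntimeException e) {
                attempts++;
                state = (attempts < maxAttempts) ? TransferState.PENDING
                                                 : TransferState.FAILED;
            }
            return state;
        }
    }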
  • The Coward 2011-03-25 15:35
    One thing that annoys me is the 100% code coverage zealots. Ok, test "critical" bits, but checking single-line returns that return a member [variable] is a bit pointless. More so with mocks, as you are returning the fake item which you know is valid or invalid.

    Usually they are very easy to break in the real world, pass in a negative number/nan/null and watch the house of cards come crashing down.

    I'd rather have more intelligent targeted tests instead of paint by numbers.

    Hell, in industry they don't test every single thing, they sample batches (e.g. a baker's dozen). Unless of course you are making bolts for a nuclear reactor, then you x-ray each of those bad boys!

    I prefer to spend more time on battle-hardening code for the real world, which involves having real-world data going into the system via the normal data channels, not through a mock or stand-alone bit. (I suppose more like an integration test, but not quite).

    Bottom line, qualitative vs quantitative testing.
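
    (To make that concrete with a made-up example: the first test below exists only to paint a coverage report green; the second is the kind of negative/NaN poke that actually finds the house of cards. Account and monthlyInterest are invented names, not anything from the article.)

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class AccountTest {

        static class Account {
            private final double balance;
            Account(double balance) { this.balance = balance; }
            double getBalance() { return balance; }   // the one-line return everyone insists on covering
            double monthlyInterest(double annualRate) {
                if (Double.isNaN(annualRate) || annualRate < 0) {
                    throw new IllegalArgumentException("rate must be a non-negative number");
                }
                return balance * annualRate / 12;
            }
        }

        @Test
        public void coverageOnlyTest() {
            // Bumps line coverage; proves nothing interesting.
            assertEquals(100.0, new Account(100.0).getBalance(), 0.0);
        }

        @Test(expected = IllegalArgumentException.class)
        public void targetedTest() {
            // The negative-number/NaN case that breaks naive code in the real world.
            new Account(100.0).monthlyInterest(Double.NaN);
        }
    }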
  • hlovdal 2011-03-26 21:05
    The only way to completely avoid the inherent risk of change is to avoid change altogether


    That is not true. Or rather, the sentence in isolation is true since it only covers "the inherent risk of change", but the overall message is not. The text has two assumptions*:

    1) Changing code implies risk - True
    2) Not changing code is risk free - False!

    and draws as a conclusion that "change is bad" from a risk perspective.

    This is wrong. For instance, changing the brake pads on your car is not risk free. You might have bought the wrong replacements or you might put them on incorrectly. However, not changing the brake pads is not risk free either (especially if you can hear them scream when being used!).

    If you discover an off-by-one error in the source code, not correcting that bug might very well be more risky than correcting it. There is no way to say objectively; specific judgment is always required.

    I am in no way saying that change is risk free. And I fully buy that even the most innocent change might turn out to have some unintended consequence. In fact, even just changing comments might have an impact (not very likely, but if the comment spans fewer or more lines the following __LINE__ tokens will change values and now might be one digit more or less. If such a token is made into a string by the pre-processor, the string is now one byte smaller/larger, which might cause some nearby code to be moved into a different memory segment. And that might have a non-negligible run-time impact).

    But to assume that not changing the code is risk free is just so utterly wrong.

    You should always aim for minimizing risk during maintenance, and that is NOT the same as minimizing change!

    ----------------------------

    *
    This is my interpretation of it; great if I am wrong on this, but then the text is very imprecise in my opinion.
  • Juvenal 2011-03-27 22:45
    One way you can tell that "Quis custodiet ipsos custodes?" is my quote and not Plato's is that I, being Roman, spoke Latin, whereas Plato, being Greek, spoke Greek.

    In future please attribute my quotations correctly. Thank you.
  • foo 2011-03-28 07:17
    blakeyrat:

    The problem is that there's no career in it. The only way to make a career in QA is to move up into management, where instead of actually testing the software, you do nothing but herd the drooling morons around.


    Yeah, which means that for those of us who are QA Devs, we get a manager who has no idea how to manage devs (or, one suspects, manage anything; he's barely coping at his job). I guess you herd drooling morons around by getting a bigger drooling moron to create rivers of drool which they follow.
  • SQLDave 2011-03-28 22:42
    The Sixties called. They want their clock back.
  • Johan Bergens 2011-03-29 09:06
    I have a minor comment about who is responsible for deciding the level of quality vs quantity. The customer doesn't always have the knowledge to decide the level of quality, or rather to understand the implications of errors. Or he/she needs to describe it in a general form that the developers can understand and work from.

    Saying you have a small error in the compressor of a freezer is not enough; you need to know if the problem might cause the freezer to stop working and spill fluids on the floor, which might be a big deal for the customer, or if it only might cause the freezer to hold a somewhat higher temperature than specified.

    Clients usually want a cheap system but with high quality. They might ask if you have tested the system and it worked, and as you wrote, testing can be many things. This is where I think BDD might help: the customer can read some tests and see if the tests reflect the wanted quality. Things not tested (caused by removing tests to save time or not writing them at all) might break/not work as expected.
  • Andrew 2011-04-05 12:28
    One problem I consistently see when people write about testing is the assumption that functional tests are test scripts which are then performed by a tester manually. The assumption is often that only unit tests are automated.

    For many (probably most) software projects this is true and valid. However, if a piece of software is expected to live and be updated for years, automating the feature tests may be necessary.

    For a large application which has to ship in multiple languages, run on multiple platforms and undergo regular updates the cost of manually testing the app over time can far outstrip the cost of automating the feature tests.
  • Craig 2011-04-09 10:12
    What, nobody picked up that "quis custodiet ipsos custodes" was Juvenal, not Plato?
  • Craig 2011-04-09 10:13
    Oh yes, I see Juvenal himself did :-D
  • Prism 2011-07-12 10:26
    Jay:
    neminem:
    Tim:
    In fact, since the number of code paths through the entire code base grows exponentially you have covered some vanishingly small percentage of the code paths.

    I'd argue anytime your input comes from the user or from another system you have no control over the output of, you've covered exactly 0%. A finite fraction of an infinite is, after all, precisely nothing. (We once found a major blocking bug in a process iff the text input to the process started with a lowercase 'p'. That was a fun one.)

    But I'm happy to work at a company that distinguishes between devs and testers; we certainly are responsible for testing the effects of our own code before checking in changes (that would be your #1), but the "test team" consists of people that were hired for that purpose. I sort of thought it was like that everywhere (well, everywhere where the whole company isn't a couple of devs).


    There's a difference between "all possible code paths" and "all possible inputs".

    Pedantic digression: The number of possible inputs is not infinite. There are always limits on the maximum size of an integer or length of a string, etc. So while the number of possible inputs to most programs is huge, it is not infinite.

    Suppose I write (I'll use Java just because that's what I write the most, all due apologies to the VB and PHP folks):


    String addperiod(String s)
    {
        if (s.length() == 0)
            return s;
        else if (s.endsWith("."))
            return s;
        else
            return s + ".";
    }


    There are an extremely large number of possible inputs. But there are only three obvious code paths: empty string, string ending with period, and "other". There are at least two other not-quite-obvious code paths: s==null and s.length==maximum length for a string. Maybe there are some other special cases that would cause trouble. But for a test of this function to be thorough, we would not need to try "a", "b", "c", ... "z", "aa", "ab", ... etc for billions and billions of possible values.

    That said, where we regularly get burned on testing is when we don't consider some case that turns out to create a not-so-obvious code path. Like, our code screws up when the customer is named "Null" or crazy things like that.



    "So while the number of possible inputs to most programs is huge, it is not infinite."

    Your example is rather bland. How about throwing a 40 or so character RegEx into the mix and coming up with all the inputs "proving" it works no matter what?

    You can describe all the 'valids' but never all the invalids, and that is the essential problem.
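
    (To make the paths-versus-inputs distinction concrete, here is a rough sketch of tests for the quoted addperiod, one per named code path, skipping the maximum-length string since that's impractical to allocate. The expectation that null simply blows up is my assumption about the implementation as written, not a spec.)

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class AddPeriodTest {

        @Test
        public void emptyStringIsReturnedUnchanged() {
            assertEquals("", addperiod(""));
        }

        @Test
        public void stringAlreadyEndingInAPeriodIsReturnedUnchanged() {
            assertEquals("done.", addperiod("done."));
        }

        @Test
        public void anyOtherStringGetsAPeriodAppended() {
            assertEquals("done.", addperiod("done"));
        }

        @Test(expected = NullPointerException.class)
        public void nullIsTheNotSoObviousPath() {
            addperiod(null);
        }

        // Copy of the quoted implementation so the sketch is self-contained.
        private static String addperiod(String s) {
            if (s.length() == 0)
                return s;
            else if (s.endsWith("."))
                return s;
            else
                return s + ".";
        }
    }

    A handful of cases covers every path; no number of cases covers every input, which is rather the point.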
  • Prism 2011-07-12 10:41
    Mr. Orangutan:
    VoiceOfSanity:
    So while the clock might be 99.999% correct, it's that 0.001 that causes an error...

    99.999% != 99.999...%
    99.999... implies the repeating 9 ...
    as we all should know, 0.999... = 1
    therefore, 99 + 1 = 100

    next time i'll just leave my snarky comments to myself ... i'm out of orange crayons, and besides they're obviously not appreciated


    "as we all should know, 0.999... = 1"

    It's starting to become accepted that that is not true.

    If 1/infinity==0 then it is possible to divide a small enough number and end up with nothing.

    I am on the side that 1/infinity==0.000..<infinity>..0001

    so that 0.999....+(1/infinity)==1

    seems obvious to me that the series 9/10+9/100+9/1000... will never give 1 by definition. Never.
  • Mchl 2011-07-25 10:36
    Alex Papadimoulis:
    I doubt you would bat an eye if the Mars Rover team tested commands before sending them using a replica Mars Rover sitting on a pile of replica Mars rocks


    While it's not a common thing, they actually do that sometimes: http://en.wikipedia.org/wiki/Spirit_rover#Stuck_in_dusty_soil_with_poor_cohesion
  • MH 2014-10-08 17:47
    Interesting article. One area of testing I'm interested in is absence testing - i.e. if the program is supposed to identify something, such as the number of widgets with defects, or in banking to return the exact transactions that are fraudulent, is it returning all of them or only a portion of them? Testing the output does not account for items not on the output.
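
    (One cheap way to get at that in an automated test, sketched here with made-up names: assert equality of the whole expected set instead of just checking that a few known items are present, so anything the detector misses makes the test fail.)

    import static org.junit.Assert.assertEquals;

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.List;
    import java.util.Set;

    import org.junit.Test;

    public class AbsenceTest {

        // Hypothetical stand-in for the real fraud-detection logic.
        static Set<String> flagFraudulent(List<String> transactions) {
            Set<String> flagged = new HashSet<String>();
            for (String t : transactions) {
                if (t.startsWith("bad")) {   // placeholder rule
                    flagged.add(t);
                }
            }
            return flagged;
        }

        @Test
        public void returnsAllFraudulentTransactionsNotJustSome() {
            Set<String> flagged = flagFraudulent(
                    Arrays.asList("ok1", "bad1", "ok2", "bad2"));

            // Asserting the whole set (not just membership of a couple of known
            // items) fails if anything expected is absent from the output.
            Set<String> expected = new HashSet<String>(Arrays.asList("bad1", "bad2"));
            assertEquals(expected, flagged);
        }
    }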