  • Power Troll (cs)

    Oh my. frits?

    Anyway, while I agree that 100% code coverage is meaningless when test defects exist, is it simply a gestalt "feeling" about when your code is good to go, or what?

  • Ed (unregistered)

    Someone needs to explain that last bit to my boss. Badly.

  • Anonymous (unregistered)

    Damn, that's a nice fridge. Anyway, I agree with the sentiment and I always unit test functionality that cannot be adequately exercised by an end user. But everything else I leave to the chimps in the testing department. Why make senior developers do the work of a trained monkey? There's a reason we get paid twice as much as them, you know.

  • SpasticWeasel (unregistered)

    So Plato spoke Latin, huh? It was Juvenal.

  • Studley (unregistered)

    Just checking whether I also need to type in " " to get a linebreak in my comment.

  • Oh God It Hurts (unregistered)

    "At best (i.e., with unlimited resources), you can be 99.999…% confident that there will be no defects in production."

    So, 100% confident then.

  • dadasd (unregistered)

    One real WTF is the number of developers (yes, 341777, I'm looking at you) who still think unit testing is a testing technique, rather than a programming one.

  • boog (cs)
    Change Impact – the estimated scope of a given change; this varies from change to change and, like testing, is always a “good enough” estimate, as even a seemingly simple change could bring down the entire system
    I think about this every time somebody tells me to "just refactor".
  • sd (unregistered) in reply to SpasticWeasel
    SpasticWeasel:
    So Plato spoke Latin, huh? It was Juvenal.

    No, you're juvenile!

  • Tim (unregistered)

    Even if you have coverage of 100% of the lines of code, that doesn't mean you have covered 100% of the code paths. In fact, since the number of code paths through the entire code base grows exponentially, you have covered some vanishingly small percentage of the code paths. For example, it is pretty easy to get 100% coverage of the following lines without uncovering the null pointer exception:

    if (x <= 0) { obj = null; }
    if (x >= 0) { obj.doSomething(); }
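    For instance, a pair of JUnit 4 tests along these lines (just a sketch; the run() wrapper and the Obj class are made up so the example compiles on its own) will make a coverage tool report 100% of those lines covered without ever trying x == 0:

    import org.junit.Test;

    public class LineCoverageExampleTest {
        static class Obj { void doSomething() { } }

        // hypothetical wrapper around the two lines above, just for this sketch
        static void run(int x) {
            Obj obj = new Obj();
            if (x <= 0) { obj = null; }
            if (x >= 0) { obj.doSomething(); }
        }

        // These two tests touch every line (and every branch outcome at least once)...
        @Test public void negativeInput() { run(-1); }
        @Test public void positiveInput() { run(1); }

        // ...yet neither hits x == 0, the only input that takes BOTH branches
        // and dereferences null.
    }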

  • Maurizio (unregistered)

    I have a problem with this:

    Integration Testing – does the program function?

    What does "the program functions" mean? That it doesn't crash? That is easy. But what if it just behaves differently from what was expected? Then, what is expected? What is expected is defined in the functional specifications, so what is the real difference between integration and functional testing?

    My personal definition, the one I use in my work in a big IT department, is that integration tests verify that the codebase corresponds to the technical design, i.e. that the different modules interact as the architect/developer decided, while the functional tests verify that the design and the code actually correspond to the functional requirements.

    Opinions?

  • Zelda (unregistered) in reply to Anonymous

    As a QA Monkey I should be offended, but then I realized that programmers use all their salary to buy dates while I use mine for bananas.

  • frits (cs) in reply to Power Troll
    Power Troll:
    Oh my. frits?

    Anyway, while I agree that 100% code coverage is meaningless when test defects exist, is it simply a gestalt "feeling" about when your code is good to go, or what?

    You rang?

  • (cs) in reply to dadasd
    dadasd:
    One real WTF is the number of developers (yes, 341777, I'm looking at you) who still think unit testing is a testing technique, rather than a programming one.
    Why can't it be both?
  • The Ancient Programmer (unregistered) in reply to Ed
    Ed:
    Someone needs to explain that last bit to my boss. Badly.

    Why explain it to him badly?

  • boog (cs) in reply to Anonymous
    Anonymous:
    Anyway, I agree with the sentiment and I always unit test functionality that cannot be adequately exercised by an end user. But everything else I leave to the chimps in the testing department. Why make senior developers do the work of a trained monkey?
    I've found that as a developer, I'd much prefer a testing department comprised of smart humans with strong technical skills, rather than chimps. With technical testers, you get surprisingly-useful reports about defects and failing tests, making it much easier to identify the problem and correct it.

    With trained monkeys, you only get phone calls saying "program no work; fix it". The flying poo can become a bit of a problem too.

  • Hatshepsut (cs)

    An interesting definition of Acceptance Testing ("formal or informal testing to ensure that functional requirements as implemented are valid and meet the business need").

    Where I come from, Acceptance Testing verifies that the delivered product meets the contractually agreed requirements. Verification that these requirements meet the business need is a separate matter.

    Also interesting is the designation of functional and non-functional requirements - a concept that seems to come from a strongly data-processing oriented background, rather than, say, a system-control oriented background, in which performance is often a critical functional requirement.

  • (cs) in reply to Maurizio
    Maurizio:
    I have a problem with this:
    Integration Testing – does the program function?

    What does "the program functions" mean? That it doesn't crash? That is easy. But what if it just behaves differently from what was expected? Then, what is expected? What is expected is defined in the functional specifications, so what is the real difference between integration and functional testing?

    My personal definition, the one I use in my work in a big IT department, is that integration tests verify that the codebase corresponds to the technical design, i.e. that the different modules interact as the architect/developer decided, while the functional tests verify that the design and the code actually correspond to the functional requirements.

    Opinions?

    Testing is such a subjective area of programming that it means different things to different people/businesses/departments. The question "Does the program function?" could mean "Does it crash?" or "Does it do XYZ, ZYX, or some other functionality?"

    Before you test, you first have to define your tests. But, to define your tests, you have to define what is a reasonable outcome for the test you want to write. Is there more than one way to access a piece of code? Does it produce more than one type of output - or any output, for that matter?

    Testing is almost like software design; you have to sit down and plan out what you want to test, and how you're going to implement it. Which, unfortunately, to most business types is a 100% (or 99.(9)%) waste of time and effort.

  • Jon (unregistered)

    I'm throwing a WtfNotFunny exception.

  • neminem (cs) in reply to Tim
    Tim:
    In fact, since the number of code paths through the entire code base grows exponentially, you have covered some vanishingly small percentage of the code paths.
    I'd argue anytime your input comes from the user or from another system you have no control over the output of, you've covered exactly 0%. A finite fraction of an infinite set is, after all, precisely nothing. (We once found a major blocking bug in a process iff the text input to the process started with a lowercase 'p'. That was a fun one.)
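    (About the best you can do is sample that space. A dumb random-input loop like the sketch below, where processText is a made-up stand-in for the real entry point, is roughly how a bug like the lowercase-'p' one eventually gets tripped over:)

    import java.util.Random;

    public class DumbFuzzer {
        // made-up stand-in for the real text-processing entry point; it hides
        // the same kind of single-character landmine we actually hit in production
        static void processText(String input) {
            if (!input.isEmpty() && input.charAt(0) == 'p') {
                throw new IllegalStateException("process blocked on: " + input);
            }
        }

        public static void main(String[] args) {
            Random rnd = new Random();
            for (int i = 0; i < 100_000; i++) {
                char[] chars = new char[1 + rnd.nextInt(20)];
                for (int j = 0; j < chars.length; j++) {
                    chars[j] = (char) (' ' + rnd.nextInt(95)); // printable ASCII
                }
                String input = new String(chars);
                try {
                    processText(input);
                } catch (RuntimeException e) {
                    System.out.println("breaking input found: \"" + input + "\"");
                    return;
                }
            }
            System.out.println("no breaking input in this sample (which proves nothing)");
        }
    }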

    But I'm happy to work at a company that distinguishes between devs and testers; we certainly are responsible for testing the effects of our own code before checking in changes (that would be your #1), but the "test team" consists of people that were hired for that purpose. I sort of thought it was like that everywhere (well, everywhere where the whole company isn't a couple devs.)

  • frits (cs)

    The 100% code coverage metric is so dumb. Have these people never heard of permutation? I assume that 100% of the code is tested at least informally. I mean do some developers really write code that they've never run before releasing it?

  • (cs) in reply to Anonymous
    Anonymous:
    Damn, that's a nice fridge.

    Shame about the furniture.

    And the clouds on the ceiling.

  • JadedTester (unregistered)
    frits:
    I mean do some developers really write code that they've never run before releasing it?

    Yes. Unfortunately, the amount of testing a developer does tends to have a negative correlation with the amount of testing their code actually needs. There's usually no clearer sign of a developer who belongs on the dole queue than one who thinks their code will work first time.

  • Anonymous (unregistered) in reply to boog
    boog:
    Anonymous:
    Anyway, I agree with the sentiment and I always unit test functionality that cannot be adequately exercised by an end user. But everything else I leave to the chimps in the testing department. Why make senior developers do the work of a trained monkey?
    I've found that as a developer, I'd much prefer a testing department comprised of smart humans with strong technical skills, rather than chimps. With technical testers, you get surprisingly-useful reports about defects and failing tests, making it much easier to identify the problem and correct it.

    With trained monkeys, you only get phone calls saying "program no work; fix it". The flying poo can become a bit of a problem too.

    I agree, I'd love to work with technically-minded testers, but I am begrudgingly speaking from experience. We only have one capable tester, the rest are rejects who only test because they don't have the skill to do anything else in IT. They sit at their desks grinding the organ according to the pre-defined scripts authored by the one guy that actually knows what he's doing. The monkey analogy is all too accurate, to be honest.

  • Exception? (unregistered) in reply to Jon
    Jon:
    I'm throwing a WtfNotFunny exception.
    Implying that this is somehow an exception to the rule.

    Bazzzzing!

  • JadedTester (unregistered) in reply to Anonymous
    Anonymous:
    boog:
    Anonymous:
    Anyway, I agree with the sentiment and I always unit test functionality that cannot be adequately exercised by an end user. But everything else I leave to the chimps in the testing department. Why make senior developers do the work of a trained monkey?
    I've found that as a developer, I'd much prefer a testing department comprised of smart humans with strong technical skills, rather than chimps. With technical testers, you get surprisingly-useful reports about defects and failing tests, making it much easier to identify the problem and correct it.

    With trained monkeys, you only get phone calls saying "program no work; fix it". The flying poo can become a bit of a problem too.

    I agree, I'd love to work with technically-minded testers, but I am begrudgingly speaking from experience. We only have one capable tester, the rest are rejects who only test because they don't have the skill to do anything else in IT. They sit at their desks grinding the organ according to the pre-defined scripts authored by the one guy that actually knows what he's doing. The monkey analogy is all too accurate, to be honest.

    Pay peanuts, etc. Sack your team, and hire people who actually want to test, have technical skills and take it seriously as both a career and a technical field. Works for Microsoft.

  • boog (cs) in reply to Anonymous
    Anonymous:
    I agree, I'd love to work with technically-minded testers, but I am begrudgingly speaking from experience. We only have one capable tester, the rest are rejects who only test because they don't have the skill to do anything else in IT.
    This is what I've experienced too; it's easy to tell which testers actually know what they're doing. Unfortunately, management wrongly sees testing as a mindless (and profitless) task, so they usually try to hire testers from the local zoo.
    Anonymous:
    The monkey analogy is all too accurate, to be honest.
    My advice: wear a raincoat.
  • (cs) in reply to frits
    frits:
    The 100% code coverage metric is so dumb. Have these people never heard of permutation? I assume that 100% of the code is tested at least informally. I mean do some developers really write code that they've never run before releasing it?

    I don't think that it's dumb. I think that it is misunderstood. 100% code coverage, to me, means that every class/function/method has a corresponding test. The real questions are: Are the tests sufficient? Are the tests meaningful?

    Having 100% code coverage is a worthy goal to attempt to achieve, so long as you don't just try to write tests for the sake of writing tests. Can you possibly cover every single permutation of error that could possibly occur? No, and you shouldn't necessarily try (it goes back to what Alex said about the cost/effort going up as you get closer to 100%).

    As an example, if I wrote a class that had three methods in it, then to me it is reasonable that to achieve 100% code coverage I would need three unit tests. If I add a new method, and I add a new unit test, then I'm still at 100% code coverage.

    Do these tests test for every single possible outcome? Probably not. Should I care? Yes. Will I write a test for every single possible outcome? Hell no.

    To me there are two types of development: test-driven development, where I write tests [theoretically] before I write code - this is a practice that helps to shape the program, while simultaneously giving me code coverage AND documentation; and, exception-driven development, where tests are written to reproduce bugs as they are found. This helps to document known issues, reproduce said issues, and provide future test vectors to ensure that future changes don't re-introduce already fixed bugs.
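    As a rough illustration of the second style (everything here is invented for the sketch, not from a real project): when a bug report comes in, the first step is a failing test that reproduces it, which then lives on in the suite as a guard against regression.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class Bug1234RegressionTest {
        // The (made-up) method under test, inlined so the sketch is self-contained.
        // Hypothetical bug #1234: labels consisting only of whitespace came back unchanged.
        static String normalizeLabel(String s) {
            return s == null ? "" : s.trim();
        }

        @Test
        public void whitespaceOnlyLabelIsNormalizedToEmpty() {
            // written first to reproduce the reported bug, kept forever afterwards
            assertEquals("", normalizeLabel("   "));
        }

        @Test
        public void nullLabelIsNormalizedToEmpty() {
            assertEquals("", normalizeLabel(null));
        }
    }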

  • trwtf (unregistered)

    A few years ago my old company off-shored all their testing to India. Not the development, just testing - so the off-shore team were writing and running scripts for a black-box that they didn't even begin to understand. Three bug-ridden releases later and... well, let's just say that company isn't around anymore.

  • (cs)

    I agree testing is designed to reach some level of quality that will never be 100%.

    Where I often see a lack in software is what I would call recoverability: The ability to correct a problem after it has occurred.

    I'm generally a cautious individual. Experience has demonstrated that I should always have a fallback plan, which is really a chain of defenses thing.

    If the application has no recoverability, then there is no fallback: The only thing between "everything is good" and "absolute disaster" is a fence called "everything works perfectly". When designing applications, one of the things that should always be done is to consider, "What if this doesn't work? What would be our fallback plan?"

    Because, if you don't think about that and plan for that, then one day something doesn't work perfectly and you find yourself in absolute disaster land because you have no other line of defense. That is actually the source of some really good stories (in here as well as in other places). I'll relate one:

    (At a previous life.) We bought a 3rd-party product for donation management. The builders of that product had a really interesting way of handling errors: They ignored all errors.

    One of the processes was the daily account apply. You entered incoming donations in a batch; the apply process would then read the batch, update the accounts and delete the batch.

    On the disaster day in question, the accounts table reached its size limit partway (early on) through the processing of the batch and, since the developers ignored such mundane messages from the database as "the table is full and I can't insert this now", the process blithely continued on.

    Then it deleted the batch.

    No fallback. No way to recover the batch and so an entire day of entry by the user was lost.

    Okay, so now let's create a fallback. That's hard, right? No, in this case actually it isn't: The solution is to back up the entire database before running the apply process. Every single time a batch is to be applied! That way, if something goes wrong, you fix the problem, restore, rerun and everything is cool.

    ...and usually, fallback is just like that. It mostly consists of one single element that I often see omitted: Keep the input state so that rerun is possible. There are "bazillions" of ways to do that; take your own pick.
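    One of those bazillion ways, sketched very roughly in JDBC (the table names and the shape of the apply step are invented; any real system would differ):

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class BatchApply {
        // Keep the input state before touching anything, so a failed apply can be rerun.
        static void applyWithFallback(Connection conn, long batchId) throws SQLException {
            try (Statement st = conn.createStatement()) {
                // snapshot the incoming batch first (CREATE TABLE ... AS SELECT can't
                // take bind parameters, so batchId is a number here, not user-supplied text)
                st.execute("CREATE TABLE batch_backup_" + batchId +
                           " AS SELECT * FROM donation_batch WHERE batch_id = " + batchId);
            }

            // ... the normal apply: read the batch, update the accounts, delete the batch ...

            // If any step above fails, the original rows are still sitting in the backup
            // table: fix the underlying problem, restore, rerun, and everything is cool.
        }
    }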

    But some people like to live on the edge and depend on the application doing everything right, and when it doesn't, well, glad I'm not them.

  • Anonymous (unregistered) in reply to boog
    boog:
    Unfortunately, management wrongly sees testing as a mindless (and profitless) task, so they usually try to hire testers from the local zoo.
    Absolutely true, this is pretty much the crux of the problem I think.
    boog:
    My advice: wear a raincoat.
    My screen is now intimately familiar with my last mouthful of coffee, thanks!
  • Machtyn (unregistered) in reply to The Ancient Programmer
    The Ancient Programmer:
    Ed:
    Someone needs to explain that last bit to my boss. Badly.

    Why explain it to him badly?

    If you explain it badly, perhaps he will have a gooder understanding.

  • Walter Kovacs (unregistered)
    Plato:
    quis custodiet ipsos custodes?
    ... and all the classes and procedures will look up and shout "Test us!" ... and I'll look down and whisper "No."
  • QJ (unregistered) in reply to boog
    boog:
    Change Impact – the estimated scope of a given change; this varies from change to change and, like testing, is always a “good enough” estimate, as even a seemingly simple change could bring down the entire system
    I think about this every time somebody tells me to "just refactor".

    If a seemingly simple change can bring down the entire system, there's something fundamentally wrong with that system and it's ripe for a bit of guerrilla refactoring.

  • QJ (unregistered) in reply to The Ancient Programmer
    The Ancient Programmer:
    Ed:
    Someone needs to explain that last bit to my boss. Badly.

    Why explain it to him badly?

    Waste of time explaining it to him well. Pearls before swine.

  • Machtyn (unregistered)

    I would like to sub scribe to this the ory about never leaving the house to avoid getting hit by a bus.

  • Brass Monkey (unregistered) in reply to QJ
    QJ:
    boog:
    Change Impact – the estimated scope of a given change; this varies from change to change and, like testing, is always a “good enough” estimate, as even a seemingly simple change could bring down the entire system
    I think about this every time somebody tells me to "just refactor".

    If a seemingly simple change can bring down the entire system, there's something fundamentally wrong with that system and it's ripe for a bit of gorilla refactoring.

    FTFY

  • QJ (unregistered) in reply to frits
    frits:
    The 100% code coverage metric is so dumb. Have these people never heard of permutation? I assume that 100% of the code is tested at least informally. I mean do some developers really write code that they've never run before releasing it?

    In some of the places I've worked I've had to opine that it would be nice if the developers checked whether their code actually compiles in the first place.

  • C-Octothorpe (unregistered) in reply to boog
    boog:
    Anonymous:
    I agree, I'd love to work with technically-minded testers, but I am begrudgingly speaking from experience. We only have one capable tester, the rest are rejects who only test because they don't have the skill to do anything else in IT.
    This is what I've experienced too; it's easy to tell which testers actually know what they're doing. Unfortunately, management wrongly sees testing as a mindless (and profitless) task, so they usually try to hire testers from the local zoo.
    Anonymous:
    The monkey analogy is all too accurate, to be honest.
    My advice: wear a raincoat.

    ... and goggles. Some of them have sniper aim...

    SIDENOTE: are there really people who WANT to test? I mean, speaking as a developer looking at the role of QA, it just doesn't seem all that appealing to me. I am not the smartest guy by far, but from my personal experience, QA tends to attract people with the intelligence levels reserved for the mildly retarded (I'd say in the neighbourhood of 100, give or take 10 pts). This usually results in three kinds of QA monkeys:

    1. The new grad/career switch above-average smart person trying to get into software development. Some people take this path, which is fine, and in some corps a necessary step...

    2. The shitty, change-resistant ex-developer who couldn't hack it anymore and (in)voluntarily goes into QA. This is more rare as pride usually takes over and they just quit and f*ck up another dev shop with their presence.

    3. The mildly retarded sociopaths who don't have the intelligence or social skills necessary to make it in any other role IT related. They may be able to make something of themselves in another profession, but because they got mad skillz at Halo, they MUST be good at everything IT related, which keeps them in the QA gig.

    I realize this sounds very elitist, which isn't my intention but this is just what I have noticed/pondered...

  • QJ (unregistered) in reply to Brass Monkey
    Brass Monkey:
    QJ:
    boog:
    Change Impact – the estimated scope of a given change; this varies from change to change and, like testing, is always a “good enough” estimate, as even a seemingly simple change could bring down the entire system
    I think about this every time somebody tells me to "just refactor".

    If a seemingly simple change can bring down the entire system, there's something fundamentally wrong with that system and it's ripe for a bit of gorilla refactoring.

    FTFY
    +1! Sprayed coffee.

  • Jay (unregistered) in reply to neminem
    neminem:
    Tim:
    In fact, since the number of code paths through the entire code base grows exponentially, you have covered some vanishingly small percentage of the code paths.
    I'd argue anytime your input comes from the user or from another system you have no control over the output of, you've covered exactly 0%. A finite fraction of an infinite set is, after all, precisely nothing. (We once found a major blocking bug in a process iff the text input to the process started with a lowercase 'p'. That was a fun one.)

    But I'm happy to work at a company that distinguishes between devs and testers; we certainly are responsible for testing the effects of our own code before checking in changes (that would be your #1), but the "test team" consists of people that were hired for that purpose. I sort of thought it was like that everywhere (well, everywhere where the whole company isn't a couple devs.)

    There's a difference between "all possible code paths" and "all possible inputs".

    Pedantic digression: The number of possible inputs is not infinite. There are always limits on the maximum size of an integer or length of a string, etc. So while the number of possible inputs to most programs is huge, it is not infinite.

    Suppose I write (I'll use Java just because that's what I write the most, all due apologies to the VB and PHP folks):

    String addperiod(String s)
    {
      if (s.length()==0)
        return s;
      else if (s.endsWith("."))
        return s;
      else
        return s+".";
    }
    

    There are an extremely large number of possible inputs. But there are only three obvious code paths: empty string, string ending with period, and "other". There are at least two other not-quite-obvious code paths: s==null and s.length==maximum length for a string. Maybe there are some other special cases that would cause trouble. But for a test of this function to be thorough, we would not need to try "a", "b", "c", ... "z", "aa", "ab", ... etc for billions and billions of possible values.
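    To make that concrete, a test class along these lines (a quick JUnit 4 sketch, with addperiod copied in so it stands alone) exercises each of those paths without enumerating billions of strings:

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class AddPeriodTest {
        // copy of addperiod above, made static so the sketch compiles on its own
        static String addperiod(String s) {
            if (s.length() == 0)
                return s;
            else if (s.endsWith("."))
                return s;
            else
                return s + ".";
        }

        @Test public void emptyStringComesBackUnchanged()    { assertEquals("", addperiod("")); }
        @Test public void trailingPeriodComesBackUnchanged() { assertEquals("done.", addperiod("done.")); }
        @Test public void anythingElseGetsAPeriodAppended()  { assertEquals("done.", addperiod("done")); }

        // one of the not-so-obvious paths: the current code throws on null input
        @Test(expected = NullPointerException.class)
        public void nullInputCurrentlyThrows() { addperiod(null); }
    }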

    That said, where we regularly get burned on testing is when we don't consider some case that turns out to create a not-so-obvious code path. Like, our code screws up when the customer is named "Null" or crazy things like that.

  • Anonymous Hacker (unregistered) in reply to C-Octothorpe
    C-Octothorpe:
    SIDENOTE: are there really people who WANT to test?
    Testing, the way it's usually defined? Absolutely not. But with the right job description, testing can approach pure hacking: make this program break, by any means necessary. I've certainly had fun taking an hour or two away from my own code to find the security, performance, etc. holes in my colleagues', and I can see that being an enjoyable full-time job.
  • JadedTester (unregistered) in reply to C-Octothorpe
    C-Octothorpe:
    boog:
    Anonymous:
    I agree, I'd love to work with technically-minded testers, but I am begrudgingly speaking from experience. We only have one capable tester, the rest are rejects who only test because they don't have the skill to do anything else in IT.
    This is what I've experienced too; it's easy to tell which testers actually know what they're doing. Unfortunately, management wrongly sees testing as a mindless (and profitless) task, so they usually try to hire testers from the local zoo.
    Anonymous:
    The monkey analogy is all too accurate, to be honest.
    My advice: wear a raincoat.

    ... and goggles. Some of them have sniper aim...

    SIDENOTE: are there really people who WANT to test? I mean, speaking as a developer looking at the role of QA, it just doesn't seem all that appealing to me. I am not the smartest guy by far, but from my personal experience, QA tends to attract people with the intelligence levels reserved for the mildly retarded (I'd say in the neighbourhood of 100, give or take 10 pts). This usually results in three kinds of QA monkeys:

    1. The new grad/career switch above-average smart person trying to get into software development. Some people take this path, which is fine, and in some corps a necessary step...

    2. The shitty, change-resistant ex-developer who couldn't hack it anymore and (in)voluntarily goes into QA. This is more rare as pride usually takes over and they just quit and f*ck up another dev shop with their presence.

    3. The mildly retarded sociopaths who don't have the intelligence or social skills necessary to make it in any other role IT related. They may be able to make something of themselves in another profession, but because they got mad skillz at Halo, they MUST be good at everything IT related, which keeps them in the QA gig.

    I realize this sounds very elitist, which isn't my intention but this is just what I have noticed/pondered...

    If testing is limited to "Click x. See if actual result matches expected result" then no, I doubt many people do. On the other hand, if it's "Build an automated framework for this system, then go to the architecture committee meeting to talk about testability. After that, meet with the product manager and BA to distill their requirements into Cucumber and then spend the afternoon on exploratory testing trying to break the new system" then why not? Done properly, QA is as much of an intellectual challenge and just as rewarding as development. The problem is that most testers are crap, and most test managers are worse, so they end up with the former definition of testing.

  • Anonymous (unregistered) in reply to Hatshepsut
    Hatshepsut:
    An interesting definition of Acceptance Testing ("formal or informal testing to ensure that functional requirements as implemented are valid and meet the business need").

    Where I come from, Acceptance Testing verifies that the delivered product meets the contractually agreed requirements. Verification that these requirements meet the business need is a separate matter.

    Also interesting is the designation of functional and non-functional requirements - a concept that seems to come from a strongly data-processing oriented background, rather than, say, a system-control oriented background, in which performance is often a critical functional requirement.

    I don't think you understand what functional and non-functional mean in the context of software requirements. The fact that a requirement is critical does not make it a functional requirement; it means it has to be met for the system to work correctly.

    Also, some companies manufacture and sell software themselves, i.e. you can't just say "that's what it says in the requirements". Not all software development is outsourced. For example, a computer game has to go through "Acceptance Testing" with the people who are likely to use it, e.g. so-called "beta testing".

  • VoiceOfSanity (unregistered) in reply to Hatshepsut
    Hatshepsut:
    An interesting definition of Acceptance Testing ("formal or informal testing to ensure that functional requirements as implemented are valid and meet the business need").

    Where I come from, Acceptance Testing verifies that the delivered product meets the contractually agreed requirements. Verification that these requirements meet the business need is a separate matter.

    Also interesting is the designation of functional and non-functional requirements - a concept that seems to come from a strongly data-processing oriented background, rather than, say, a system-control oriented background, in which performance is often a critical functional requirement.

    From someone who has worked in the defense contractor environment for a few years now, this is very much the truth. Whether or not the item/program meets the business need of the customer is not as important as whether that item/program meets the contract requirements. You could build the most sophisticated fighter jet around for $8 million a jet, but if it doesn't meet the contract requirements then the military couldn't care less if it saves money/lives/time. The same is true for software: as long as it meets the contract requirements (even if it's just barely within those requirements), the software is accepted, buggy interface and occasional crashes along with it.

    Fortunately (at least in most cases) when it comes to the space program, just because it meets the contract requirements isn't enough, especially when it comes to man-rated spacecraft and space probes that'll be working for a decade or two (for example, the Cassini Saturn probe and the MESSENGER Mercury probe.) "Just good enough" doesn't cut it in situations like that, so the testing that is done is to ensure that things work, they work right and that they'll continue working in the future.

    Then again, those probes undergo the most intense real-world testing/use that anyone could envision. But that's after delivery/launch. chuckle

  • bar (unregistered) in reply to C-Octothorpe
    C-Octothorpe:
    boog:
    My advice: wear a raincoat.

    ... and goggles. Some of them have sniper aim...

    SIDENOTE: are there really people who WANT to test? I mean, speaking as a developer looking at the role of QA, it just doesn't seem all that appealing to me. I am not the smartest guy by far, but from my personal experience, QA tends to attract people with the intelligence levels reserved for the mildly retarded (I'd say in the neighbourhood of 100, give or take 10 pts). This usually results in three kinds of QA monkeys:

    1. The new grad/career switch above-average smart person trying to get into software development. Some people take this path, which is fine, and in some corps a necessary step...

    2. The shitty, change-resistant ex-developer who couldn't hack it anymore and (in)voluntarily goes into QA. This is more rare as pride usually takes over and they just quit and f*ck up another dev shop with their presence.

    3. The mildly retarded sociopaths who don't have the intelligence or social skills necessary to make it in any other role IT related. They may be able to make something of themselves in another profession, but because they got mad skillz at Halo, they MUST be good at everything IT related, which keeps them in the QA gig.

    I realize this sounds very elitist, which isn't my intention but this is just what I have noticed/pondered...

    If there is a trick to decent QA testing, it's to choose to do it in the first place. Choose to restrict code submits to only those accompanied by functional unit tests. Choose to use a system of shared peer review for said code submits. Choose to have a continuous and scheduled build of your product, and to insist that fixing a broken build takes priority. Choose to have representatives from the coding teams participate in the final QA process. Choose to get signoff from all teams before releasing. Choose to have feature freezes, periodically-reviewed API documentation and believable deadlines. Choose a risk-free release cycle. Choose QA for everyone (especially the managers).

    Captcha was of no consequat.

  • VoiceOfSanity (unregistered) in reply to JadedTester
    JadedTester:
    Pay peanuts, etc. Sack your team, and hire people who actually want to test, have technical skills and take it seriously as both a career and a technical field. Works for Microsoft.
    And here I thought Microsoft's beta testers were the general public once the RTM was issued out. *gryn*
  • anonym (unregistered) in reply to frits
    frits:
    I mean do some developers really write code that they've never run before releasing it?

    sadly, yes

  • (cs) in reply to C-Octothorpe
    C-Octothorpe:
    SIDENOTE: are there really people who WANT to test?
    Yes, but it really does depend on whether you're starting out from the position of having tests already defined. When you're working with an already-highly-tested codebase that's got a good support framework in place, it's quite nice to focus strongly on TDD and trying to ensure that all new code paths that you add are fully tested. (Hint: it can result in the most amazing hacks to reliably trigger particularly awkward cases.)

    But if the code has very few tests, or if you're doing integration tests across a whole mess of dependencies, testing is a real drag; huge amounts of work for little reward.

  • Mr. Orangutan (unregistered)
    Alex Papadimoulis:
    no matter how hard you try, a definitive answer is impossible. At best (i.e., with unlimited resources), you can be 99.999…% confident that there will be no defects in production
    but but 99.999...% IS 100%
