• Bert Glanstron (unregistered) in reply to Darth Kaner

    Dear Darth Kaner,

    In case you can’t tell, this is a grown-up place. The fact that you insist on using your ridiculous handle clearly shows that you’re too young and too stupid to be commenting on TDWTF.

    Go away and grow up.

    Sincerely, Bert Glanstron

  • C-Octothorpe (unregistered) in reply to trtrwtf
    trtrwtf:
    anon#213:
    trtrwtf:
    anon#213:
    I thought the wtf was the fridge being too short. That small amount of useless space would drive me crazy.

    Your pretty easily driven crazy, aren't you?

    DAMMIT ITS "YOU'RE", NOT "YOUR"!!!!!!!

    Thanks, C-Octo. I owe you a beer. (I love to see them pop like that)

    Yeah, it usually starts with a slight twitch in one eye, and goes from there...

  • mwanaheri (unregistered) in reply to Coyne

    From the last two years of maintaining a background application (converting EDIFACT messages), I'd suggest two main features:

    1. Fallback/repair strategies for anything that goes wrong.
    2. Monitoring. Absolutely essential. If something goes wrong in a background process, let me know immediately so that the data can be fixed before any further damage is caused.

    I don't care about speed, don't mind a clumsy UI, and won't be happy with strict verification of the standard (most partners have their 'specials' anyway). If on top of that I can configure the converter nicely, I'm happy, even with some bugs.
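
    A rough sketch of what those two features might look like around a converter (all of the class and method names below are invented for illustration; the real conversion, alerting and repair hooks would be whatever the system actually provides):

    import java.util.List;

    public class ConversionRun {
        // Hypothetical collaborators, named only for this sketch.
        interface Converter  { String convert(String edifactMessage) throws Exception; }
        interface Alerter    { void alert(String subject, Exception cause); }
        interface Quarantine { void park(String rawMessage); }  // keep the original for repair/replay

        static void run(List<String> messages, Converter converter, Alerter alerter, Quarantine quarantine) {
            for (String raw : messages) {
                try {
                    String converted = converter.convert(raw);
                    // ... hand 'converted' to the next system ...
                } catch (Exception e) {
                    quarantine.park(raw);                           // fallback: never lose the original input
                    alerter.alert("EDIFACT conversion failed", e);  // monitoring: tell someone immediately
                    // carry on with the next message rather than silently aborting the whole run
                }
            }
        }
    }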

  • MadJo (Professional tester) (unregistered) in reply to JadedTester

    Exploratory testing implies more in-depth knowledge of the system than I normally had; only after a month or so could I start doing exploratory testing, well into the second iteration. By that time, I'd already have had to rewrite hundreds of these test cases.

    BDD, yes, as a matter of fact, this IS the first time I've heard of it. And after reading up on it, I could not have applied it to the project I was working on.

    Checklists are all fine and dandy, but when you have a system interface test to do, with hundreds of variables and interface specs that change by the day, you can't keep up with checklists alone. Especially when they demand a full test every goddamn release. (And sadly I was at the bottom of the food chain.)

  • Ryan (unregistered) in reply to Coyne
    Coyne:
    Okay, so now let's create a fallback. That's hard, right? No, in this case actually it isn't: The solution is to back up the entire database before running the apply process. Every single time a batch is to be applied! That way, if something goes wrong, you fix the problem, restore, rerun and everything is cool.

    Or, you know, use a database transaction to update the accounts. Rollback is much easier than restoring a database from backup. Though if your developers insist on ignoring errors, you should probably do the backup too.
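
    A minimal sketch of that transactional approach using plain JDBC (the schema - account_ledger, batch_entries - is made up): the batch is only deleted in the same transaction that applies it, and any error rolls the whole thing back instead of being ignored.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class ApplyBatch {
        // Sketch only: table and column names are hypothetical.
        static void applyBatch(Connection conn, int batchId) throws SQLException {
            boolean previousAutoCommit = conn.getAutoCommit();
            conn.setAutoCommit(false);            // start a transaction
            try (PreparedStatement apply = conn.prepareStatement(
                     "INSERT INTO account_ledger (account_id, amount) " +
                     "SELECT account_id, amount FROM batch_entries WHERE batch_id = ?");
                 PreparedStatement purge = conn.prepareStatement(
                     "DELETE FROM batch_entries WHERE batch_id = ?")) {
                apply.setInt(1, batchId);
                apply.executeUpdate();            // a full table etc. throws SQLException here
                purge.setInt(1, batchId);
                purge.executeUpdate();            // the batch is only deleted in the same transaction
                conn.commit();                    // both steps succeed, or neither does
            } catch (SQLException e) {
                conn.rollback();                  // nothing is lost; the batch is still there
                throw e;                          // surface the error instead of ignoring it
            } finally {
                conn.setAutoCommit(previousAutoCommit);
            }
        }
    }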

    Fault tolerance (the design principle that encompasses recoverability) definitely doesn't get the attention it deserves in most software development. Fault-tolerant operating systems are pretty popular in embedded systems (no joke) to protect applications from each other and restart things gracefully when something does crash. The closest thing we get on the desktop is "sorry about your luck; would you like to report this crash?" and the occasional document or browser tab recovery.

  • pgn674 (unregistered)
    Alex Papadimoulis:
    Like many things that converge on perfection, there are significantly increasing costs as you approach 100%. A five-minute smoke test may only provide 40% certainty, but it may cost five hours of testing to achieve 60%, and fifty hours to achieve 80%.
    You might want to choose better example numbers here. When you do the first jump, you go from 5 minutes to 5 hours (5hr/5min= 60 times as much), and from 40% to 60% (60/40= 1.5 times as much). That's a 1.5/60= 0.025 goodness efficiency you get from this jump.

    But in the second example, you go from 5 hours to 50 hours (50/5= 10 times as much), and from 60% to 80% (80/60= 1.333 times as much). That's a 1.333/10= 0.133 goodness efficiency you get from the second jump. The second jump is better than the first jump, and the point you wanted to make through your example was that the jumps would get progressively worse.

    You want this to come out True, where 'p' is percentage, and 't' is time:

    (p2/p1)/(t2/t1)>(p3/p2)/(t3/t2)
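
    For what it's worth, a quick calculation of the two "goodness efficiency" ratios from the article's numbers (times converted to minutes) bears this out:

    public class CoverageEfficiency {
        public static void main(String[] args) {
            // Times in minutes and certainty in percent, from the article's example.
            double t1 = 5, t2 = 5 * 60, t3 = 50 * 60;
            double p1 = 40, p2 = 60, p3 = 80;

            double firstJump  = (p2 / p1) / (t2 / t1);   // 1.5 / 60  = 0.025
            double secondJump = (p3 / p2) / (t3 / t2);   // 1.333 / 10 = 0.133

            System.out.printf("first jump: %.3f, second jump: %.3f%n", firstJump, secondJump);
            // The second jump comes out better, i.e. (p2/p1)/(t2/t1) > (p3/p2)/(t3/t2)
            // does NOT hold for these numbers - which is the complaint above.
        }
    }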

  • (cs) in reply to pgn674
    pgn674:
    Alex Papadimoulis:
    Like many things that converge on perfection, there are significantly increasing costs as you approach 100%. A five-minute smoke test may only provide 40% certainty, but it may cost five hours of testing to achieve 60%, and fifty hours to achieve 80%.
    You might want to choose better example numbers here. When you do the first jump, you go from 5 minutes to 5 hours (5hr/5min= 60 times as much), and from 40% to 60% (60/40= 1.5 times as much). That's a 1.5/60= 0.025 goodness efficiency you get from this jump.

    But in the second example, you go from 5 hours to 50 hours (50/5= 10 times as much), and from 60% to 80% (80/60= 1.333 times as much). That's a 1.333/10= 0.133 goodness efficiency you get from the second jump. The second jump is better than the first jump, and the point you wanted to make through your example was that the jumps would get progressively worse.

    You want this to come out True, where 'p' is percentage, and 't' is time:

    (p2/p1)/(t2/t1)>(p3/p2)/(t3/t2)
    What? No you don't. What you want is to show that each progressive increase takes a lot longer, not that the.... degradation of efficiency is not as much. <snipped>

  • mystery guest (unregistered) in reply to Ed
    Ed:
    Someone needs to explain that last bit to my boss. Badly.
    I can explains to him. Badly. Very Badly, in fact.
  • Mr Frost (unregistered)

    I don't understand how testing would have had any impact in identifying a bug that was created post-deployment (the fridge was scratched in transit after it had been built).

    There was a bug in my computer - it stopped working after I dropped it. I think this is a result of the windows operating system not having been sufficiently tested for this scenario. Maybe I don't understand testing.

  • Rohypnol (unregistered) in reply to dadasd
    dadasd:
    One real WTF is the number of developers (yes, 341777, I'm looking at you) who still think unit testing is a testing technique, rather than a programming one.

    I think that dude was a tester - he talked about large sums of money...

  • Tester 1A (unregistered) in reply to Maurizio
    Maurizio:
    I have a problem with this: > Integration Testing – does the program function?

    What does "the program functions" mean? It doesn't crash? That is easy. But what if it just behaves differently from what's expected? Then, what is expected? What is expected is defined in the functional specifications, so what is the real difference between integration and functional testing?

    My personal definition, the one I use in my work in a big IT department, is that integration tests verify that a codebase corresponds to the technical design, i.e. that the different modules interact as the architect/developer decided, while the functional tests verify that the design and the code actually correspond to the functional requirements.

    Opinions?

    Surely integration testing tests that the program 'works' once all the components are integrated. That is, individual components have been tested (unit testing?) and are believed to work. They are then chucked together in one massive conglomeration. The Integration test basically looks to make sure that 'things' work - that is, we don't crash unexpectedly, any results we present appear valid (though not necessarily correct) etc.

    Functional testing tests that the application works as expected: that the results it returns are correct (given the requirements), and that the application doesn't simply do something (which is what integration testing checks) but does what the requirements say it should do.

    Integration testing can be done (reasonably) blind. We don't deliberately test using real, or even realistic, data, and we don't necessarily need an awareness of how the application will be used. We check that menu items do things (we don't care whether that's necessarily what they're meant to do). Functional testing, on the other hand, would check that selecting a menu item does exactly what the requirements (or design, or any other document) indicate it should do.

    There is a fine line, to a degree, insofar as selecting a 'Print' button probably has a certain expectation that a print dialogue is displayed (even in integration testing). Functional testing would require this dialogue to have a specific appearance, and possibly a specific functionality of its own....

  • Dehli Belly (unregistered) in reply to boog
    boog:
    Anonymous:
    Anyway, I agree with the sentiment and I always unit test functionality that cannot be adequately exercised by an end user. But everything else I leave to the chimps in the testing department. Why make senior developers do the work of a trained monkey?
    I've found that as a developer, I'd much prefer a testing department comprised of smart humans with strong technical skills, rather than chimps. With technical testers, you get surprisingly-useful reports about defects and failing tests, making it much easier to identify the problem and correct it.

    With trained monkeys, you only get phone calls saying "program no work; fix it". The flying poo can become a bit of a problem too.

    Hmm... I think it's a 'horses for courses' thing. I think some testing by highly technical people is required (it should be noted that some testers are highly technical people). I think some testing should be done by people with average technical skill (i.e. follow-the-instructions testing that a monkey should be able to do; if they find a problem, it is easy enough to walk through it with them to work out what [s]they messed up[/s] the issue is). Some testing then needs to be done with real monkeys. Preferably the tech-savvy ones who are proficient in opening all manner of files by double-clicking them, have the expertise to inadvertently cripple any system you might allow them to use, and have a very good memory (they have memorized word for word "I didn't do anything, but the computer... and I don't remember what I did before").

    I would have thought testing requires all sorts of testing by all sorts of different parties to give the product the best chance at quality. Technical-only testing doesn't work, because technical people aren't necessarily looking for results to mean the same thing. "Broken" doesn't mean the same thing either, nor does any one of a number of descriptions for problems that occur. This site brings many of them to light...

  • AceCoder (unregistered) in reply to JadedTester
    JadedTester:
    I mean do some developers really write code that they've never run before releasing it?

    Yes. Unfortunately, the amount of testing a developer does tends to have a negative correlation with the amount of testing their code actually needs. There's usually no clearer sign of a developer who belongs on the dole queue than one who thinks their code will work first time.

    Perhaps part of this problem stems from estimates at the planning stage that make the same assumption? It will work the first time it is written; if not, an extra 5 minutes should suffice for me to track down and repair any issues...

  • Jimbo (unregistered) in reply to QJ
    QJ:
    boog:
    Change Impact – the estimated scope of a given change; this varies from change to change and, like testing, is always a “good enough” estimate, as even a seemingly simple change could bring down the entire system
    I think about this every time somebody tells me to "just refactor".

    If a seemingly simple change can bring down the entire system, there's something fundamentally wrong with that system and it's ripe for a bit of guerrilla refactoring.

    Indeed. Consider the original code:

    int a=3, b=4;
    printf("%d + %d = %d\n", a, b, a+b);
    

    Version 1.1:

    char *numWord[] = { "one", "two", "three", "four"};
    int a=3, b=4;
    printf("%s + %s = %d\n", numWord[a], numWord[b], a+b);
    

    small changes can have massive impacts.

    Something about butterflies flapping their wings and tsunamis...

  • Asdg (unregistered) in reply to QJ
    QJ:
    frits:
    The 100% code coverage metric is so dumb. Have these people never heard of permutation? I assume that 100% of the code is tested at least informally. I mean do some developers really write code that they've never run before releasing it?

    In some of the places I've worked I've had to opine that it would be nice if the developers checked whether their code actually compiles in the first place.

    Nagesh is on holidays. When did you work in India? Failing to compile is a bug that should be picked up by testers well before deployment into production. Good quality programmers are expensive and shouldn't waste time on such trivialities as compiling code.

    On a more serious note, I have been in situations where this appears to be the case, but it is simply that not everything that was changed was committed. In particular, I've found header files missing. Also, an individual component may build fine, but issues may arise when a full build is attempted, depending on mutually used files. I remember one example where a library had been changed and the signature of some methods had changed as a result, which meant that other components relying on that same library wouldn't build under a full release. (Admittedly, the libraries were shared in weird ways, and there should have been mechanisms to minimise the chance of this happening, and a load of other things, but we don't always pick the codebases we inherit.) His deployment went through fine, but when subsequent changes had to be made, it took a while to realise that the library had been destroyed by a previous change.

  • (cs) in reply to Coyne
    Coyne:
    One of the processes was the daily account apply. You entered incoming donations in a batch; the apply process would then read the batch, update the accounts and delete the batch.

    On the disaster day in question, the accounts table reached its size limit partway (early on) through the processing of the batch and, since the developers ignored such mundane messages from the database as "the table is full and I can't insert this now", the process blithely continued on.

    Then it deleted the batch.

    No fallback. No way to recover the batch and so an entire day of entry by the user was lost.

    Reminds me of a friend's early attempt at a message board in PHP. They weren't familiar with databases, so they stored everything in flat files. All user information was stored in a file (one line per user, much like CSV except with some control character instead of commas); all messages were stored in one giant file again separated by some control character.

    When they went to implement editing posts, they realized you can't really insert things into the middle of a file (if the new post is longer than the old one), so they implemented it as:

    1. Create a new posts file
    2. Copy all of the posts before the one being edited into the new file
    3. Write the edited post into the file just as a new post would be written
    4. Copy all of the posts after it into the new file
    5. Replace the original file with the new file.

    This worked well enough; it was of course slow, but it was a small enough forum that it wasn't really an issue. Until eventually that file grew to be quite a few megabytes - and the server started to run low on disk space. Traffic was low enough that the posts file itself didn't grow to fill the disk - but then someone tried to edit a post. After copying just a few posts into the new file, it ran out of space. It then proceeded to blindly ignore the fact that the following writes were failing, until it had finished "copying" all of the posts into the new file - then deleted the original file and replaced it with the now much smaller "updated" copy. Whoops.
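
    A sketch (in Java rather than the original PHP, and with invented names) of the write-to-a-temp-file-and-swap approach that would have avoided the data loss: every write is checked, and the original file is only replaced once the new copy has been written successfully.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;
    import java.util.ArrayList;
    import java.util.List;

    public class EditPost {
        // Sketch: posts are one per line for simplicity. The edited content goes to a
        // temporary file first; the original is only replaced after every write has
        // succeeded, so a full disk can never destroy the existing data.
        static void editPost(Path postsFile, int index, String newPost) throws IOException {
            List<String> posts = new ArrayList<>(Files.readAllLines(postsFile));
            posts.set(index, newPost);

            Path tmp = Files.createTempFile(postsFile.getParent(), "posts", ".tmp");
            try {
                Files.write(tmp, posts);                      // throws IOException if the disk fills up
                Files.move(tmp, postsFile,
                           StandardCopyOption.REPLACE_EXISTING,
                           StandardCopyOption.ATOMIC_MOVE);   // swap in the new file only on success
            } catch (IOException e) {
                Files.deleteIfExists(tmp);                    // the original file is left untouched
                throw e;
            }
        }
    }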

    As a fun side note, they also didn't think to validate their inputs, so you could become an administrator just by putting the control character used for field separation in your username followed by a 1. You'd have a post count of "January" and other such nonsense, but oh well.

  • Yais isn't it, wot? (unregistered) in reply to MadJo (professional software tester)
    MadJo (professional software tester):

    And you NEVER let a developer test his own stuff, because it'll be hard to get a grip on the quality in that case. No one is truly critical of his or her own creations. "Oh, I know what I mean there. I'll just leave it in." (Also, testers shouldn't review their own test cases; let someone else do that.)

    Agree 100% that developers should never test their own stuff. Aside from all else, developers have in mind the problem that they are trying to fix, and will most likely only test whether that problem is fixed (not necessarily whether other cases still work). Developers (especially support developers working on bug fixes) tend to have a closed world view, where all they see is the problem they are addressing. This, coupled with the fact that most devs back themselves (which is normal - it is very difficult to get anything done if you doubt your own work) and are likely to under-test and assume their change works as intended, makes testing by developers alone more than a touch dangerous (that said, I would expect any reasonable developer to have at least performed basic sanity checks to make sure that the change appears to have made the difference expected).

  • Yais isn't it, wot? (unregistered) in reply to Sanity
    Sanity:
    <some snipping> I agree that the real goals should be, well, the real goals. If the choice is between 100% test coverage and being able to deploy to production without breaking things, that's a no-brainer -- tests are a tool to make your program work, but clearly the goal is to make your program work. I just wanted to point out why, if you do suddenly find a really serious defect (like not being able to deploy to production), generally, the more comprehensive your test suite, the better off you are. <some more snipping>

    I understand that Father Cronin doesn't need five nines for his database, and maybe we don't have to care about Unicode support, but there's a difference between a tricky-but-obscure defect that's not worth fixing and having some pride in your craft.

    I'll assume you just worded things a little strangely in your haste... tests are not a tool to make your program work, but rather a tool that highlights the parts of your program that do not work. An analogy (though not necessarily a very good one): the gauges on a car's dash are not tools that help the car work, but rather tools that help you find issues with it.

    As for Fr. Cronin, while I agree with your point about pride, I thought Alex's point was that the criticality of an issue is based on a number of factors, and that an inconvenience to someone running a very non-critical system is not a high priority (especially if they have been engaged at a discount rate).

  • (cs) in reply to Mr Frost
    Mr Frost:
    I don't understand how testing would have had any impact in identifying a bug that was created post-deployment (the fridge was scratched in transit after it had been built).

    It's a stretched analogy to begin with, but the idea here is more that the supply chain has a defect. In theory, LG could have invested more to fix this defect, but they instead accepted the loss of profit.

  • Grandma Nazzi (unregistered) in reply to Clockwork Orange
    Clockwork Orange:
    Alex's Soapbox was what made The Daily WTF a keeper. Sadly, we don't get enough of these inciteful articles.

    Keep 'em coming, Alex?

    FTFY

  • jeo (unregistered) in reply to I Push Pixels
    I Push Pixels:
    Depending on where you are, it may be the organization itself that prevents the testers from acting like they know what they're doing.

    As a former tester, I've been on both sides of the equation. I have been in situations where the lead tester demanded that we stick to the test script, icebergs (and crashes!) be damned. And as a coder, I have talked to the testers and found out, surprise, surprise, that many of them have a pretty good idea of what's happening under the hood (even if only from a layperson's perspective), but alas, they've been instructed, on pain of death, to stick to the script and keep their bug reports non-technical.

    I came from the game industry, which is frequently a massive worse-than-failure unto itself and goes out of its way to perpetuate animosity between the engineers and the testing staff, so things may be different.

    Worse still, it seems to promote an attitude of "fix (or remove) the offending test script" rather than "let's fix the actual problem". I have worked in a place where it was acknowledged that some of the test cases appeared to fail, but this was down to bad coding in the test case rather than bad coding in the application. I never really understood why we couldn't simply remove the test cases we knew to be broken (or change their expected result if we were happy that they failed).

  • John Muller (unregistered)

    Some code coverage tools actually check the possible paths through functions/methods, not just lines/statements.

  • (cs) in reply to Ryan
    Ryan:
    Coyne:
    Okay, so now let's create a fallback. That's hard, right? No, in this case actually it isn't: The solution is to back up the entire database before running the apply process. Every single time a batch is to be applied! That way, if something goes wrong, you fix the problem, restore, rerun and everything is cool.

    Or, you know, use a database transaction to update the accounts. Rollback is much easier than restoring a database from back up. Though if your developers insist on ignoring errors, you should probably do the backup too.

    Remember that in this case, the developers did not check for errors: No detection = blindly commit = the error becomes permanent.

    And therein lies the hitch, because even if the developers did normally check for errors, the check might have been accidentally omitted for the critical INSERT statement. We call those bugs. And then, if no one actually tested to see what happened when the INSERT failed, well, then no detection = blindly commit = the error becomes permanent (just as before).

    Ryan:
    Fault tolerance (the design principle that encompasses recoverability) definitely doesn't get the attention it deserves in most software development. Fault-tolerant operating systems are pretty popular in embedded systems (no joke) to protect applications from each other and restart things gracefully when something does crash. The closest thing we get on the desktop is "sorry about your luck; would you like to report this crash?" and the occasional document or browser tab recovery.

    I'm not sure I agree that recoverability lies entirely within "fault tolerance". Fault tolerance tends to focus on the idea that we work around, set aside, or ignore problems that occur. It tends to ignore the question of what comes after tolerance; that is, "We've tolerated the error, now how does it get corrected?"

    My focus is hospital business (as opposed to something neat like rocketry). Suppose we make an error in processing an account: The result is that the payer refuses to pay. Generally we can resubmit...but to be able to resubmit we must be able to correct the error. Somewhere within the paper or electronic medical record, we must have enough information to reconstruct what happened and come up with a correct result; the alternative is a write-off (lost $). As processing shifts away from paper toward electronic, it becomes more important for all the original inputs to be retained: Otherwise we may be helpless to figure out exactly what went wrong.

    Reconsider the "donations batch" story: I guess you could say that was the ultimate fault-tolerant system (since all faults were ignored). The issue was really the deletion of the batch: a fundamental design flaw in my view (whether faults are ignored or not). With the batch gone, no matter what went wrong, it can't be fixed.

    There are lots of ways to keep the batch, but if you don't, and there was an error of any kind, you are helpless to recover. Current thinking on fault tolerance doesn't seem to deal completely with this area of recoverability.
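
    One way to "keep the batch", sketched against the same kind of hypothetical schema as the transaction example earlier in the thread: move the entries to an archive table in the same transaction instead of deleting them, so the original inputs survive for later reconstruction.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class ArchiveBatch {
        // Sketch only, hypothetical schema: archive the batch rather than destroying it.
        // Intended to run inside the same transaction that applies the batch.
        static void archiveAndRemove(Connection conn, int batchId) throws SQLException {
            try (PreparedStatement archive = conn.prepareStatement(
                     "INSERT INTO batch_archive (batch_id, account_id, amount) " +
                     "SELECT batch_id, account_id, amount FROM batch_entries WHERE batch_id = ?");
                 PreparedStatement purge = conn.prepareStatement(
                     "DELETE FROM batch_entries WHERE batch_id = ?")) {
                archive.setInt(1, batchId);
                archive.executeUpdate();   // the original inputs survive for later reconstruction
                purge.setInt(1, batchId);
                purge.executeUpdate();     // only now are they removed from the working table
            }
        }
    }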

  • J.D. (unregistered)

    The article has a point, but what do you do when everything is done almost exactly the opposite way? Minimal testing or none at all; or, in the best cases, the code is written and then committed to the repository, never even having been compiled. This is usually a problem with certain individuals; as the article implied, you should have some pride in your work. If you constantly skip every problem or defect you come across by saying "ain't my job", then the problem is you.

    Unfortunately I have had the "pleasure" of working with this kind of guy over the years. Perhaps the most infuriating, and at the same time funniest, comment was "do it fast now, you can fix it later" - and that was in a debate over whether I would write bad code or not. Writing bad code on purpose is the worst thing about these people. The time it takes to think about what you're doing and do it as right as possible on the first pass is most probably less than the time spent writing the bad code first, then writing it again, then fixing it, and then getting someone else to fix it because you've forgotten what it was supposed to do in the first place.

    I've also written several programs that run tests, for code and for hardware. Usually the problem with testing software in general, at least in our company, is that the people who write the documentation aren't software designers, and the information - especially for hardware testers - comes in documents written from the authors' own viewpoint that tell the developer nothing about what they want, what the product should be doing, and how it should be tested.

    Perhaps there should be some kind of test for the people being hired to do a certain job. Oh, but hey, interviews are considered acceptance testing. But what about the other four kinds of testing? I haven't seen those applied at the interview stage. Perhaps they should reconsider...

  • (cs) in reply to Asdg
    Asdg:
    QJ:
    frits:
    The 100% code coverage metric is so dumb. Have these people never heard of permutation? I assume that 100% of the code is tested at least informally. I mean do some developers really write code that they've never run before releasing it?

    In some of the places I've worked I've had to opine that it would be nice if the developers checked whether their code actually compiles in the first place.

    Nagesh is on holidays. When did you work in India? Failing to compile is a bug that should be picked up by testers well before deployment into production. Good quality programmers are expensive and shouldn't waste time on such trivialities as compiling code.

    On a more serious note, I have been in situations where this appears to be the case, but it is simply that not everything that was changed was committed. In particular, I've found header files missing. Also, an individual component may build fine, but issues may arise when a full build is attempted, depending on mutually used files. I remember one example where a library had been changed and the signature of some methods had changed as a result, which meant that other components relying on that same library wouldn't build under a full release. (Admittedly, the libraries were shared in weird ways, and there should have been mechanisms to minimise the chance of this happening, and a load of other things, but we don't always pick the codebases we inherit.) His deployment went through fine, but when subsequent changes had to be made, it took a while to realise that the library had been destroyed by a previous change.

    Yes, those things happen, no doubt about it ("Who hasn't done this?" or whatever the catchphrase is), and it's something that regular scheduled builds (using Cruise Control or Hudson or something sweet like that) usually catch adequately. As those tools do with those knuckle-draggers who haven't checked to see whether it compiles. Unfortunately, because such tools stop the code in its tracks before it can get as far as Test, the perceived seriousness of this perpetration is smaller than perhaps it ought to be. ("What, Bubba checked in a java class with undeclared variables again? Cruise Control caught it. Never mind, we fixed it for him ...")

  • (cs) in reply to Alex Papadimoulis
    Alex Papadimoulis:
    Mr Frost:
    I don't understand how testing would have had any impact in identifying a bug that was created post-deployment (the fridge was scratched in transit after it had been built).

    It's a stretched analogy to begin with, but the idea here is more that the supply chain has a defect. In theory, LG could have invested more to fix this defect, but they instead accepted the loss of profit.

    They might have done both. "Gahd dammit, Wolverine, that's your last screw-up! Go and get a job as a barber or something! Hmm ... reckon someone would buy this thing cheap? It's only scratched ..."

  • EPA (unregistered)

    TRWTF are American refrigerators, obviously. Everything's always bigger in the US...

  • Ernold (unregistered)
    When I explain all of this to that enthusiastic developer, the response is sometimes along the lines of, “but that’s not my job, so who cares?”

    I rarely see that attitude. More likely, "But as the developer, I'm least qualified to say whether it's production-ready". Which is true.

  • (cs) in reply to James Emerton
    James Emerton:
    Unit Testing is the process of testing a very specific bit of code. Proper unit testing involves mocking out database or file access and testing the operation of a specific class or function.

    Integration Testing tests the full (integrated!) stack of code, such as you might find in a normal runtime. Integration tests will write to the database or filesystem, and as such they are expected to take more time to run than unit tests.

    Unfortunately, there is much confusion surrounding this issue, even among developers. Perhaps this is due to the fact that testing frameworks often have the word "Unit" in their name, in spite of the fact that they can usually be applied to any sort of automated testing.

    I came here expecting to find this comment. I was disappointed it didn't appear until the second page of comments.
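
    To make the distinction concrete, a minimal sketch using a hand-rolled fake instead of a mocking framework (all names invented): the unit check below never touches a real database, whereas an integration test of the same code would run against one.

    import java.util.HashMap;
    import java.util.Map;

    public class UnitVsIntegrationDemo {
        // The dependency that would normally hit the database.
        interface AccountRepository {
            int balanceOf(String accountId);
        }

        // The specific bit of code under test.
        static class OverdraftChecker {
            private final AccountRepository repo;
            OverdraftChecker(AccountRepository repo) { this.repo = repo; }
            boolean isOverdrawn(String accountId) { return repo.balanceOf(accountId) < 0; }
        }

        public static void main(String[] args) {
            // Unit test: the database is replaced by an in-memory fake.
            Map<String, Integer> balances = new HashMap<>();
            balances.put("A", -50);
            balances.put("B", 100);
            AccountRepository fake = id -> balances.getOrDefault(id, 0);

            OverdraftChecker checker = new OverdraftChecker(fake);
            assert checker.isOverdrawn("A");
            assert !checker.isOverdrawn("B");
            System.out.println("unit checks passed (run with -ea to enable asserts)");
            // An integration test would construct OverdraftChecker with a repository
            // backed by a real database and exercise the full stack.
        }
    }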

  • Ernold (unregistered) in reply to boog
    boog:
    C-Octothorpe:
    SIDENOTE: are there really people who WANT to test? I mean, speaking as a developer looking at the role of QA, it just doesn't seem all that appealing to me. I am not the smartest guy by far, but from my personal experience, QA tends to attract people with the intelligence levels reserved for the mildly retarded (I'd say in the neighbourhood of 100, give or take 10 pts).
    I should point out that in my experience, the few technically-inclined testers with whom I've worked were actually business analysts who also participated in requirements gathering and analysis, documentation, etc. In other words, the best testers seem to be more than just testers.

    Even better, they already had a good understanding of how the software was supposed to work when they went about their testing responsibilities.

    How about developers who WANT to test?

  • Magnus Persson (unregistered) in reply to too_many_usernames
    That's why there's a method of testing called "decision coverage" which is generally done in addition to code coverage.

    Decision coverage doesn't test every single input to a condition.

    What it does is test each possibility (true or false) for each decision, and shows that each input into that final "true or false" has an effect on the decision. Typically if you find an input to the Boolean expression has no effect on the Boolean it's an error of some sort. It also ensures that conditions which are supposed to be mutually exclusive are, in fact, exclusive.

    This way you test all outcomes rather than all possible ways to get that outcome. Of course, this testing is generally used in safety-critical applications where the main goal is safe operation, not some specific functionality; the goal of this testing is to ensure safe states due to (or at least gain a full awareness of the effects of) all decision paths.

    And then you also have path coverage, where every possible path through the decisions is counted.

    The above mentioned

    if (x <= 0)
       obj = NULL;
    if (x >= 0)
       obj.doSomething();
    

    can be given 100% line AND decision coverage with only two test cases (x<0 and x>0), without finding the null pointer exception. However, with full path coverage, you need to add a third test case (x:=0), which also exposes the potential null pointer exception.
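
    A minimal sketch of those three cases (the snippet is wrapped in a made-up method so it can be exercised; in practice these would be proper test cases):

    public class PathCoverageDemo {
        static class Obj {
            void doSomething() { /* no-op for the demo */ }
        }

        // The snippet from the comment, wrapped in a method so it can be called.
        static void run(int x) {
            Obj obj = new Obj();
            if (x <= 0)
                obj = null;
            if (x >= 0)
                obj.doSomething();   // NullPointerException when x == 0
        }

        public static void main(String[] args) {
            run(-1);  // first if taken, second skipped  -> fine
            run(1);   // first if skipped, second taken  -> fine
            // The two calls above already give 100% line and decision coverage.
            run(0);   // both ifs taken -> NullPointerException; only path coverage forces this case
        }
    }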

    The downside with path coverage is that it's harder to measure. In more complicated work, it may not at all be evident that the fourth possible path (both if statements evaluate to false) is never possible (e.g. if the value of x may be changed in the first execution). Hence path coverage is harder to measure than line or decision coverage.

    Then, as discussed previously in the thread, you may also have to discuss coverage of user inputs etc...

  • PurpleDog (unregistered) in reply to frits
    frits:
    The 100% code coverage metric is so dumb. Have these people never heard of permutation? I assume that 100% of the code is tested at least informally. I mean do some developers really write code that they've never run before releasing it?

    Oh yes... they most certainly do! I work with some of them.

  • Niels (unregistered)

    Interesting read... You should really check out ISTQB or ASTQB

    http://www.istqb.org (or) http://www.astqb.org/

    Particularly the foundation glossary. It has some of the concepts you mention here completely worked out for you, plus tons more. (Especially the test levels and the risk calculation.)

    Regards, Niels (ISTQB CTAL TM)

  • Niels (unregistered) in reply to PurpleDog
    PurpleDog:
    frits:
    The 100% code coverage metric is so dumb. Have these people never heard of permutation? I assume that 100% of the code is tested at least informally. I mean do some developers really write code that they've never run before releasing it?

    Oh yes... they most certainly do! I work with some of them.

    I'm afraid I have to go with him... Btw, code coverage only requires you to cover all code at least once. If you want all permutations, you're looking for decision coverage.

    Still, I think there are more intelligent ways to pick only the most interesting cases to test.

  • (cs)

    To follow up on what Coyne said, here's another real-world example. For the past while I've been supporting and extending the "project from hell". The original developer quit unexpectedly within weeks of the project being "done", and as we began digging into the code it became apparent why he had done so: by quitting at that time, he could still give the illusion that the project worked and was on track, when in reality it was a steaming pile of excrement.

    The project is primarily written in Silverlight, with WCF services talking to a SQL Server backend through an Enterprise Library ORM. (Except for the EL, this is all pretty standard stuff. None of the rest of us had seen EL in active production use in 3 or 4 years.)

    This wasn't the developer's first Silverlight app, though it was his first "professional" one and the first one where he tried to use MVVM. However, he tried to stuff MVVM into his existing framework of knowledge instead of the other way around. So in some cases it's as far from MVVM as one can possibly be, while in others it kind of flirts with the edges of what one might consider to be MVVM-like.

    One of the developer's "innovations" was the use of an XML property bag in his objects for all fields that were not directly linked to the object's lifetime. Hey, great - now we don't have to adjust the database schema when something changes, amirite?

    Ultimately, a number of bugs around this system helped us to realize:

    • The "change tracking" (internal state management) in the objects was fragile and in most cases done wrong - for example, for some of the objects, merely loading one from the database was sufficient to trip the "I've changed" condition.
    • The lack of robust error checking - both for sanity checking and actual business logic - allowed the data to be corrupted easily. This ranged from partial corruption, where fields in the XML would not be updated correctly, to complete corruption, where the XML would be entirely wiped out or reset to a default state.
    • The security model prevented users from seeing any data besides their own. However, the underlying data was still silently downloaded AND PERSISTED by all users for all other users. Due to the fragility mentioned above, this meant that, if something went wrong on Alex's computer, it could wipe out my data for no apparent reason.

    Several weeks ago, for an unrelated item, we added a trigger-based audit trail on several key tables in the database. Basically, whenever any event occurs on a record, its values are stored as a snapshot in a separate table. However, this audit trail has now saved our butts twice when some kind of data corruption occurred and we were able to easily identify the last-known-good state of certain records and immediately roll them back to that state.
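
    For anyone curious what such a trigger looks like, a rough, hypothetical sketch (table, column and trigger names invented; the T-SQL is illustrative only): every update or delete snapshots the row's previous values into an audit table, which is what lets you find the last-known-good state later.

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class AuditTrail {
        // Hypothetical example: snapshot the previous values of "widgets" rows
        // into "widgets_audit" on every update or delete.
        static void installAuditTrigger(Connection conn) throws SQLException {
            String ddl =
                "CREATE TRIGGER trg_widgets_audit ON widgets AFTER UPDATE, DELETE AS " +
                "INSERT INTO widgets_audit (widget_id, payload, changed_at) " +
                "SELECT widget_id, payload, GETUTCDATE() FROM deleted";
            try (Statement stmt = conn.createStatement()) {
                stmt.execute(ddl);   // 'deleted' holds the pre-change values in SQL Server
            }
        }
    }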

    The ultimate fix - sanity and business rule checks on both the client and server side and preventing users' client apps from uploading any data besides their own - is still in testing, targeting a production deployment later this week. Until then, we're babysitting the database using an audit trail that wasn't even designed for this specific purpose.

  • Lone Star Male (unregistered) in reply to EPA
    EPA:
    TRWTF are [s]American[/s] Texan refrigerators, obviously. Everything's always bigger in [s]the US[/s] Texas...
    FTFY...and when they say everything, they mean everything, ladies.
  • JadedTester (unregistered) in reply to Niels
    Niels:
    Interesting read... You should really check out ISTQB or ASTQB

    http://www.istqb.org (or) http://www.astqb.org/

    Particularly the foundation glossary. It has some of the concepts you mention here completely worked out for you, plus tons more. (Especially the test levels and the risk calculation.)

    Regards, Niels (ISTQB CTAL TM)

    Because being a good tester is about being able to correctly answer 26/40 multiple choice questions.

  • Anonymous (unregistered) in reply to Lone Star Male
    Lone Star Male:
    EPA:
    TRWTF are [s]American[/s] Texan refrigerators, obviously. Everything's always bigger in [s]the US[/s] Texas...
    FTFY...and when they say everything, they mean everything, ladies.
    Yep, a perfect description of the Texan ego.
  • Lone Star Male (unregistered) in reply to Anonymous
    Anonymous:
    Lone Star Male:
    EPA:
    TRWTF are [s]American[/s] Texan refrigerators, obviously. Everything's always bigger in [s]the US[/s] Texas...
    FTFY...and when they say everything, they mean everything, ladies.
    Yep, a perfect description of the Texan ego.
    Well, it is hard to hide...if you know what I mean.
  • Frank (unregistered)

    Wow. This is the biggest WTF I've ever seen. A software tester who understands that 100% code coverage != quality! What's next? A software quality person who understands that more process won't always prevent the introduction of bugs? Or a manager who understands both? (OK, I know the manager thing is simply asking for too much.)

  • Machtyn (unregistered) in reply to нагеш
    нагеш:
    Machtyn:
    I would like to sub scribe to this the ory about never leaving the house to avoid getting hit by a bus.

    You still run the risk of an airplane crashing through your roof, or heck, a bus running through your front door. Maybe Nagesh and his taxi will let themselves in.

    Oh, come on... If akismet had allowed me to post a very fine picture of a school bus that had crashed through a house, or a link to one, I would have. My problem is I should have completed my critique by stating as much. (Do a GIS for "bus crashes into house"... that got me results.)

    /CAPTCHA augue: what two people do when they have a disagweement.

  • Spurgeon General (unregistered) in reply to Lone Star Male
    Lone Star Male:
    Anonymous:
    Lone Star Male:
    EPA:
    TRWTF are [s]American[/s] Texan refrigerators, obviously. Everything's always bigger in [s]the US[/s] Texas...
    FTFY...and when they say everything, they mean everything, ladies.
    Yep, a perfect description of the Texan ego.
    Well, it is hard to hide...if you know what I mean.
    I had an uncle from Texas who was bit by a rattlesnake. After 3 days of intense pain, the rattlesnake died.
  • bob (unregistered)
    The foundation guy notices a problem with the plans, but says that the framer will fix it. The framer says that the drywaller will fix it, the drywaller says the finish carpenter will fix it, the finish carpenter says the painter will fix it, and the painter says “I sure hope the homeowner is blind and doesn’t see it.”

    The general contractor is the person you need to deal with, not the subcontractors. The general contractor will be the one to get it fixed, and he'll delegate.

    You can test all day but there will always be something wrong somewhere that is missed. Flawed logic statements or flaws in building materials always happen and you'll never see either until something happens.

  • (cs) in reply to Dehli Belly
    Dehli Belly:
    I would have thought testing requires all sorts of testing by all sorts of different parties to give the product the best chance at quality. Technical-only testing doesn't work, because technical people aren't necessarily looking for results to mean the same thing. "Broken" doesn't mean the same thing either, nor does any one of a number of descriptions for problems that occur. This site brings many of them to light...
    That depends on the product and its potential users, but I'm not talking about "types" of testing anyway. Yeah, technical-only testing probably isn't enough; I don't think anyone said otherwise.

    I'm talking specifically about the testers' ability to clearly report failing tests and defects. How do you expect to efficiently deal with bugs if you don't have any details about them?

  • golddog (unregistered) in reply to C-Octothorpe
    C-Octothorpe:
    trtrwtf:
    anon#213:
    trtrwtf:
    anon#213:
    I thought the wtf was the fridge being too short. That small amount of useless space would drive me crazy.

    Your pretty easily driven crazy, aren't you?

    DAMMIT ITS "YOU'RE", NOT "YOUR"!!!!!!!

    Thanks, C-Octo. I owe you a beer. (I love to see them pop like that)

    Yeah, it usually starts with a slight twitch in one eye, and goes from there...

    Clouseau!!!!

  • Cue Hilarious Laughter (unregistered) in reply to Lone Star Male
    Lone Star Male:
    Anonymous:
    Lone Star Male:
    EPA:
    TRWTF are [s]American[/s] Texan refrigerators, obviously. Everything's always bigger in [s]the US[/s] Texas...
    FTFY...and when they say everything, they mean everything, ladies.
    Yep, a perfect description of the Texan ego.
    Well, it is hard to hide...if you know what I mean.
    This article is about tests. I think your thinking of testes.
  • C-Octothorpe (unregistered) in reply to Cue Hilarious Laughter
    Cue Hilarious Laughter:
    Lone Star Male:
    Anonymous:
    Lone Star Male:
    EPA:
    TRWTF are [s]American[/s] Texan refrigerators, obviously. Everything's always bigger in [s]the US[/s] Texas...
    FTFY...and when they say everything, they mean everything, ladies.
    Yep, a perfect description of the Texan ego.
    Well, it is hard to hide...if you know what I mean.
    This article is about tests. I think your thinking of testes.

    There's a difference?

  • anon (unregistered) in reply to frits

    Yes. Knuth would be an obvious example; in at least one piece of his correspondence he notes, "Beware of bugs in the above code; I have only proved it correct, not tried it."

    Also, the number of people who turn "i=i+1;" statements into "i++;" statements or run automagic code formatters without testing; and just the plain old "it ran once, so every line in this file is automagically good" kind of people.

  • boog (unregistered)

    The WTF is my damn penguin, I'm pretty sure I'd strangle the where?

  • callcopse (unregistered)

    TRWTF is clearly insecure septic tanks trying to advertise their purportedly outsized genitalia in programming blog comments. Who's that for the benefit of then?
