Admin
Oh my. frits?
Anyway, while I agree that 100% code coverage is meaningless when test defects exist, is it simply a gestalt "feeling" about when your code is good to go, or what?
Admin
Someone needs to explain that last bit to my boss. Badly.
Admin
Damn, that's a nice fridge. Anyway, I agree with the sentiment and I always unit test functionality that cannot be adequately exercised by an end user. But everything else I leave to the chimps in the testing department. Why make senior developers do the work of a trained monkey? There's a reason we get paid twice as much as them, you know.
Admin
So Plato spoke Latin, huh? It was Juvenal.
Admin
Just checking whether I also need to type in " " to get a linebreak in my comment.
Admin
"At best (i.e., with unlimited resources), you can be 99.999…% confident that there will be no defects in production."
So, 100% confident then.
Admin
One real WTF is the number of developers (yes, 341777, I'm looking at you) who still think unit testing is a testing technique, rather than a programming one.
Admin
No, you're juvenile!
Admin
Even if you have coverage of 100% of the lines of code, that doesn't mean you have covered 100% of the code paths. In fact, since the number of code paths through the entire code base grows exponentially, you have covered only some vanishingly small percentage of them. For example, it is pretty easy to get 100% coverage of the following lines without uncovering the null pointer exception:
if (x <= 0) { obj = NULL; }
if (x >= 0) { obj.doSomething(); }
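To make that concrete, here's a minimal sketch (the Widget class, the process() wrapper and the main-method "tests" are all invented for illustration): the two calls below execute every line, yet the x = 0 path that dereferences null never runs.

    class CoverageDemo {
        static class Widget {
            void doSomething() { /* no-op for the example */ }
        }

        static void process(int x) {
            Widget obj = new Widget();
            if (x <= 0) { obj = null; }          // executed by the x = -1 call
            if (x >= 0) { obj.doSomething(); }   // executed by the x = +1 call
        }

        public static void main(String[] args) {
            process(-1);  // covers the first if-body, skips the second
            process(1);   // covers the second if-body, skips the first
            // Every line of process() has now run, but process(0) -- the path
            // that nulls obj and then dereferences it -- was never exercised.
            System.out.println("100% line coverage; the NPE at x = 0 is still there.");
        }
    }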
Admin
I have a problem with this:
What does "the program functions" mean? That it doesn't crash? That's easy. But what if it just behaves differently from what was expected? Then, what is expected? What is expected is defined in the functional specifications, so what is the real difference between integration and functional testing?
My personal definition, which I use in my work in a big IT department, is that integration tests verify that a codebase corresponds to a technical design, i.e. that the different modules interact as the architect/developer decided, while functional tests verify that the design and the code actually correspond to the functional requirements.
Opinions?
Admin
As a QA Monkey I should be offended, but then I realized that programmers use all their salary to buy dates while I use mine for bananas.
Admin
Why explain it to him badly?
Admin
With trained monkeys, you only get phone calls saying "program no work; fix it". The flying poo can become a bit of a problem too.
Admin
An interesting definition of Acceptance Testing ("formal or informal testing to ensure that functional requirements as implemented are valid and meet the business need").
Where I come from, Acceptance Testing verifies that the delivered product meets the contractually agreed requirements. Verification that these requirements meet the business need is a separate matter.
Also interesting is the designation of functional and non-functional requirements - a concept that seems to come from a strongly data-processing oriented background, rather than, say, a system-control oriented background, in which performance is often a critical functional requirement.
Admin
Testing is such a subjective area of programming, that it means different things to different people/businesses/departments. The question, "Does the program function?" could mean "Does it crash?", or "Does it do XYZ, ZYX, or some other functionality?"
Before you test, you first have to define your tests. But, to define your tests, you have to define what is a reasonable outcome for the test you want to write. Is there more than one way to access a piece of code? Does it produce more than one type of output - or any output, for that matter?
Testing is almost like software design; you have to sit down and plan out what you want to test, and how you're going to implement it. Which, unfortunately, to most business types is a 100% (or 99.(9)%) waste of time and effort.
Admin
I'm throwing a WtfNotFunny exception.
Admin
But I'm happy to work at a company that distinguishes between devs and testers; we certainly are responsible for testing the effects of our own code before checking in changes (that would be your #1), but the "test team" consists of people who were hired for that purpose. I sort of thought it was like that everywhere (well, everywhere the whole company isn't just a couple of devs).
Admin
The 100% code coverage metric is so dumb. Have these people never heard of permutations? I assume that 100% of the code is tested at least informally. I mean, do some developers really write code that they've never run before releasing it?
Admin
Shame about the furniture.
And the clouds on the ceiling.
Admin
Yes. Unfortunately, the amount of testing a developer does tends to have a negative correlation with the amount of testing their code actually needs. There's usually no clearer sign of a developer who belongs on the dole queue than one who thinks their code will work first time.
Admin
Bazzzzing!
Admin
Pay peanuts, etc. Sack your team, and hire people who actually want to test, have technical skills and take it seriously as both a career and a technical field. Works for Microsoft.
Admin
I don't think that it's dumb. I think that it is misunderstood. 100% code coverage, to me, means that every class/function/method has a corresponding test. The real questions are: Are the tests sufficient? Are the tests meaningful?
Having 100% code coverage is a worthy goal to attempt to achieve, so long as you don't just try to write tests for the sake of writing tests. Can you possibly cover every single permutation of error that could possibly occur? No, and you shouldn't necessarily try (it goes back to what Alex said about the cost/effort going up as you get closer to 100%).
As an example, if I wrote a class that had three methods in it, then to me it is reasonable that to achieve 100% code coverage I would need three unit tests. If I add a new method, and I add a new unit test, then I'm still at 100% code coverage.
Do these tests test for every single possible outcome? Probably not. Should I care? Yes. Will I write a test for every single possible outcome? Hell no.
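As a throwaway sketch of that per-method notion of coverage (the Calculator class and its checks below are made up purely for illustration):

    class Calculator {
        int add(int a, int b) { return a + b; }
        int subtract(int a, int b) { return a - b; }
        int multiply(int a, int b) { return a * b; }
    }

    // Three methods, three corresponding tests: "100% coverage" by this
    // per-method measure, even though plenty of inputs go untried.
    class CalculatorTest {
        public static void main(String[] args) {
            Calculator c = new Calculator();
            check(c.add(2, 3) == 5, "add");
            check(c.subtract(5, 3) == 2, "subtract");
            check(c.multiply(2, 3) == 6, "multiply");
        }

        static void check(boolean ok, String name) {
            System.out.println((ok ? "PASS " : "FAIL ") + name);
        }
    }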
To me there are two types of development: test-driven development, where I write tests [theoretically] before I write code - this is a practice that helps to shape the program, while simultaneously giving me code coverage AND documentation; and exception-driven development, where tests are written to reproduce bugs as they are found. This helps to document known issues, reproduce said issues, and provide future test vectors to ensure that future changes don't re-introduce already-fixed bugs.
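And a rough sketch of the second style (the PriceParser class and its trailing-space bug are hypothetical): the test reproduces the reported input and then stays in the suite as a guard against the bug quietly coming back.

    import java.math.BigDecimal;

    // Hypothetical bug report: "parsePrice blows up when the input has a trailing space."
    class PriceParser {
        static BigDecimal parsePrice(String raw) {
            return new BigDecimal(raw.trim());   // the fix: trim before parsing
        }
    }

    class PriceParserRegressionTest {
        public static void main(String[] args) {
            // This input threw NumberFormatException before the fix.
            BigDecimal result = PriceParser.parsePrice("19.99 ");
            if (!result.equals(new BigDecimal("19.99"))) {
                throw new AssertionError("trailing-space bug has returned");
            }
            System.out.println("PASS trailing-space regression");
        }
    }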
Admin
A few years ago my old company off-shored all their testing to India. Not the development, just testing - so the off-shore team were writing and running scripts for a black-box that they didn't even begin to understand. Three bug-ridden releases later and... well, let's just say that company isn't around anymore.
Admin
I agree that testing is designed to reach some level of quality that will never be 100%.
Where I often see a lack in software is what I would call recoverability: The ability to correct a problem after it has occurred.
I'm generally a cautious individual. Experience has demonstrated that I should always have a fallback plan, which is really a chain of defenses thing.
If the application has no recoverability, then there is no fallback: The only thing between "everything is good" and "absolute disaster" is a fence called "everything works perfectly". When designing applications, one of the things that should always be done is to consider, "What if this doesn't work? What would be our fallback plan?"
Because, if you don't think about that and plan for that, then one day something doesn't work perfectly and you find yourself in absolute disaster land because you have no other line of defense. That is actually the source of some really good stories (in here as well as in other places). I'll relate one:
Okay, so now let's create a fallback. That's hard, right? No, in this case actually it isn't: The solution is to back up the entire database before running the apply process. Every single time a batch is to be applied! That way, if something goes wrong, you fix the problem, restore, rerun and everything is cool.
...and usually, fallback is just like that. It mostly consists of one single element that I often see omitted: Keep the input state so that rerun is possible. There are "bazillions" of ways to do that; take your own pick.
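A rough sketch of that shape of fallback (the file names and the applyBatch step are invented; the real thing would be a database dump, but the pattern is the same):

    import java.io.IOException;
    import java.nio.file.*;

    class BatchRunner {
        public static void main(String[] args) throws Exception {
            Path data = Paths.get("accounts.db");        // stand-in for the real data store
            Path backup = Paths.get("accounts.db.bak");  // snapshot taken before every run

            if (Files.notExists(data)) {
                Files.writeString(data, "seed data");    // so the sketch runs standalone
            }

            // Keep the input state so a rerun is possible.
            Files.copy(data, backup, StandardCopyOption.REPLACE_EXISTING);
            try {
                applyBatch(data);                        // the risky part
            } catch (Exception e) {
                // Fallback: restore the snapshot, fix the problem, rerun.
                Files.copy(backup, data, StandardCopyOption.REPLACE_EXISTING);
                throw e;
            }
        }

        static void applyBatch(Path data) throws IOException {
            // Placeholder for whatever the nightly apply process actually does.
            Files.writeString(data, "updated data");
        }
    }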
But some people like to live on the edge and depend on the application doing everything right, and when it doesn't, well, glad I'm not them.
Admin
If a seemingly simple change can bring down the entire system, there's something fundamentally wrong with that system and it's ripe for a bit of guerrilla refactoring.
Admin
Waste of time explaining it to him well. Pearls before swine.
Admin
I would like to subscribe to this theory about never leaving the house to avoid getting hit by a bus.
Admin
In some of the places I've worked I've had to opine that it would be nice if the developers checked whether their code actually compiles in the first place.
Admin
... and goggles. Some of them have sniper aim...
SIDENOTE: are there really people who WANT to test? I mean, speaking as a developer looking at the role of QA, it just doesn't seem all that appealing to me. I am not the smartest guy by far, but from my personal experience, QA tends to attract people with intelligence levels reserved for the mildly retarded (I'd say in the neighbourhood of 100, give or take 10 pts). This usually results in three kinds of QA monkeys:
The new grad or career-switcher: an above-average-smart person trying to get into software development. Some people take this path, which is fine, and in some corps it's a necessary step...
The shitty, change-resistant ex-developer who couldn't hack it anymore and (in)voluntarily goes into QA. This is rarer, as pride usually takes over and they just quit and f*ck up another dev shop with their presence.
The mildly retarded sociopaths who don't have the intelligence or social skills necessary to make it in any other role IT related. They may be able to make something of themselves in another profession, but because they got mad skillz at Halo, they MUST be good at everything IT related, which keeps them in the QA gig.
I realize this sounds very elitist, which isn't my intention but this is just what I have noticed/pondered...
Admin
There's a difference between "all possible code paths" and "all possible inputs".
Pedantic digression: The number of possible inputs is not infinite. There are always limits on the maximum size of an integer or length of a string, etc. So while the number of possible inputs to most programs is huge, it is not infinite.
Suppose I write (I'll use Java just because that's what I write the most, all due apologies to the VB and PHP folks):
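Something along these lines (the helper name ensureTrailingPeriod and the tiny main are just for illustration; what matters is the paths):

    class Punctuator {
        static String ensureTrailingPeriod(String s) {
            if (s.length() == 0) {
                return s;               // path 1: empty string
            } else if (s.endsWith(".")) {
                return s;               // path 2: already ends with a period
            } else {
                return s + ".";         // path 3: everything else
            }
        }

        public static void main(String[] args) {
            // One input per obvious path is enough to reach all three branches.
            System.out.println("[" + ensureTrailingPeriod("") + "]");
            System.out.println(ensureTrailingPeriod("Done."));
            System.out.println(ensureTrailingPeriod("Done"));
        }
    }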
There are an extremely large number of possible inputs. But there are only three obvious code paths: empty string, string ending with period, and "other". There are at least two other not-quite-obvious code paths: s==null and s.length==maximum length for a string. Maybe there are some other special cases that would cause trouble. But for a test of this function to be thorough, we would not need to try "a", "b", "c", ... "z", "aa", "ab", ... etc for billions and billions of possible values.
That said, where we regularly get burned on testing is when we don't consider some case that turns out to create a not-so-obvious code path. Like, our code screws up when the customer is named "Null" or crazy things like that.
Admin
If testing is limited to "Click x. See if the actual result matches the expected result" then no, I doubt many people do. On the other hand, if it's "Build an automated framework for this system, then go to the architecture committee meeting to talk about testability. After that, meet with the product manager and BA to distill their requirements into Cucumber, and then spend the afternoon on exploratory testing trying to break the new system" then why not? Done properly, QA is as much of an intellectual challenge as development, and just as rewarding. The problem is that most testers are crap, and most test managers are worse, so they end up with the former definition of testing.
Admin
I don't think you understand what functional and non-functional mean in the context of software requirements. The fact that a requirement is critical does not make it a functional requirement; it means it has to be met for the system to work correctly.
Also, some companies manufacture and sell software themselves, i.e. you can't just say "that's what it says in the requirements". Not all software development is outsourced. For example, a computer game has to go through "acceptance testing" with the people who are likely to use it, i.e. so-called "beta testing".
Admin
Fortunately (at least in most cases) when it comes to the space program, just because it meets the contract requirements isn't enough, especially when it comes to man-rated spacecraft and space probes that'll be working for a decade or two (for example, the Cassini Saturn probe and the MESSENGER Mercury probe.) "Just good enough" doesn't cut it in situations like that, so the testing that is done is to ensure that things work, they work right and that they'll continue working in the future.
Then again, those probes undergo the most intense real-world testing/use that anyone could envision. But that's after delivery/launch. chuckle
Admin
If there is a trick to decent QA testing, it's to Choose to do it in the first place. Choose to restrict code submits to only those accompanied by functional unit tests. Choose to use a system of shared peer review for said code submits. Choose to have a continuous and scheduled build of your product, and to insist that fixing a broken build takes priority. Choose to have representatives from the coding teams participate in the final QA process. Choose to get signoff from all teams before releasing. Choose to have feature freezes, periodically reviewed API documentation and believable deadlines. Choose a risk-free release cycle. Choose QA for everyone (especially the managers).
Captcha was of no consequat.
Admin
sadly, yes
Admin
But if the code has very few tests, or if you're doing integration tests across a whole mess of dependencies, testing is a real drag; huge amounts of work for little reward.
Admin