Admin
Dear Darth Kaner,
In case you can’t tell, this is a grown-up place. The fact that you insist on using your ridiculous handle clearly shows that you’re too young and too stupid to be commenting on TDWTF.
Go away and grow up.
Sincerely, Bert Glanstron
Admin
Yeah, it usually starts with a slight twitch in one eye, and goes from there...
Admin
From the last two years of maintaining a background application (converting EDIFACT messages), I'd suggest two main features:
I don't care about speed, don't mind a clumsy UI, and won't be happy with strict verification of the standard (most partners have their 'specials' anyway). If I can then configure the converter nicely, I'm happy, even with some bugs.
Admin
Exploratory testing implies more in-depth knowledge of the system than I normally had; only after a month or so could I start doing exploratory testing, well into the second iteration. By that time, I'd have had to rewrite hundreds of these test cases.
BDD: yes, as a matter of fact, this IS the first time I've heard of it. And after reading up on it, I could not have applied it to the project I was working on.
Checklists are all fine and dandy, but when you have a system interface test to do, with hundreds of variables, where the interface specs change by the day, you can't keep up with checklists alone. Especially if they demand a full test every goddamn release. (And sadly I was at the bottom of the food chain.)
Admin
Or, you know, use a database transaction to update the accounts. Rollback is much easier than restoring a database from a backup. Though if your developers insist on ignoring errors, you should probably do the backup too.
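Something like this is all it takes (sketching it in Python with sqlite3; the table and column names are made up, but the point is that if anything fails before the commit, nothing sticks):
[code]
import sqlite3

def apply_donation(conn, batch_id, account_id, amount):
    """Apply one donation inside a transaction; any failure rolls back."""
    try:
        cur = conn.cursor()
        cur.execute(
            "UPDATE accounts SET balance = balance + ? WHERE id = ?",
            (amount, account_id),
        )
        cur.execute(
            "INSERT INTO processed_donations (batch_id, account_id, amount) "
            "VALUES (?, ?, ?)",
            (batch_id, account_id, amount),
        )
        conn.commit()        # both statements take effect together...
    except sqlite3.Error:
        conn.rollback()      # ...or neither does
        raise                # and don't swallow the error, that's the whole WTF
[/code]
Rolling back costs nothing; restoring last night's backup costs you the day's data.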
Fault tolerance (the design principle that encompasses recoverability) definitely doesn't get the attention it deserves in most software development. Fault-tolerant operating systems are pretty popular in embedded systems (no joke) to protect applications from each other and restart things gracefully when something does crash. The closest thing we get on the desktop is "sorry about your luck; would you like to report this crash?" and the occasional document or browser tab recovery.
Admin
But in the second example, you go from 5 hours to 50 hours (50/5 = 10 times as much), and from 60% to 80% (80/60 = 1.333 times as much). That's a 1.333/10 = 0.133 'goodness efficiency' for the second jump. The second jump comes out better than the first, yet the point you wanted to make with your example was that the jumps would get progressively worse.
You want this to come out True, where 'p' is percentage and 't' is time:
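Roughly, in code, the arithmetic above works out like this (only the second jump's numbers are quoted here; the first jump's values come from the article's example and aren't repeated):
[code]
def goodness(p_before, p_after, t_before, t_after):
    """Relative coverage gain divided by relative time cost ('goodness efficiency')."""
    return (p_after / p_before) / (t_after / t_before)

# The second jump described above: 5 hours -> 50 hours, 60% -> 80%.
print(goodness(p_before=60, p_after=80, t_before=5, t_after=50))
# (80/60) / (50/5) = 1.333... / 10 = 0.133...

# For the example to show diminishing returns, the first jump's goodness
# would have to come out higher than the second jump's.
[/code]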
Admin
I don't understand how testing would have had any impact in identifying a bug that was created post-deployment (the fridge was scratched in transit after it had been built).
There was a bug in my computer: it stopped working after I dropped it. I think this is a result of the Windows operating system not having been sufficiently tested for this scenario. Maybe I don't understand testing.
Admin
I think that dude was a tester - he talked about large sums of money...
Admin
Surely integration testing tests that the program 'works' once all the components are integrated. That is, individual components have been tested (unit testing?) and are believed to work. They are then chucked together in one massive conglomeration. The integration test basically looks to make sure that 'things' work: we don't crash unexpectedly, any results we present appear valid (though not necessarily correct), and so on.
Functional testing tests that the application works as expected: that the results it returns are correct (given the requirements), and that the application doesn't simply do something (which is what integration testing checks) but does what the requirements say it should do.
Integration testing can be done (reasonably) blind. We don't deliberately test using real, or even realistic, data, and we don't necessarily need an awareness of how the application will be used. We check that menu items do things (we don't care whether that's necessarily what they're meant to do). Functional testing, on the other hand, would check that selecting a menu item does exactly what the requirements (or design, or every other document) say it should do.
There is a fine line, to a degree, insofar as selecting a 'Print' button probably carries a certain expectation that a print dialogue is displayed (even in integration testing). Functional testing would require this dialogue to have a specific appearance, and possibly specific functionality of its own...
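A toy illustration of that split, in Python (the 'application' and its numbers are invented purely for the example):
[code]
import unittest

# A tiny stand-in "application" so the example is self-contained
# (the function and the figures are made up for illustration).
def generate_invoice(customer_id):
    net = 100.00
    return {"customer": customer_id, "total": round(net * 1.19, 2), "currency": "EUR"}

class IntegrationSmokeTest(unittest.TestCase):
    def test_pipeline_produces_something_plausible(self):
        # Integration-style: the assembled pieces run and return *something* valid-looking.
        invoice = generate_invoice(customer_id=42)
        self.assertIsNotNone(invoice)
        self.assertIn("total", invoice)

class FunctionalTest(unittest.TestCase):
    def test_result_matches_the_requirement(self):
        # Functional: the value is exactly what the (made-up) requirements say it should be.
        invoice = generate_invoice(customer_id=42)
        self.assertEqual(invoice["total"], 119.00)   # net 100.00 plus 19% tax, per the spec
        self.assertEqual(invoice["currency"], "EUR")

if __name__ == "__main__":
    unittest.main()
[/code]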
Admin
Hmm... I think it's a 'horses for courses' thing. I think some testing by highly technical people is required (it should be noted that some testers are highly technical people). I think some testing should be done by people with average technical skill, i.e. follow-the-instructions testing that a monkey should be able to do; if they find a problem, it is easy enough to walk through it with them to work out what [s]they messed up[/s] the issue is. Some testing then needs to be done with real monkeys: preferably the tech-savvy ones who are proficient in opening all manner of files by double-clicking them, have the expertise to inadvertently cripple any system you might allow them to use, and have a very good memory (they have memorized, word for word, "I didn't do anything, but the computer... and I don't remember what I did before").
I would have thought testing requires all sorts of testing by all sorts of different parties to give the product the best chance at quality. Technical-only testing doesn't work, because technical people aren't necessarily looking for results to mean the same thing. "Broken" doesn't mean the same thing to everyone either, nor does any one of a number of other descriptions for problems that occur. This site brings many of them to light...
Admin
Perhaps part of this problem stems from estimates at the planning stage that make the same assumption? It will work the first time it is written; if not, an extra 5 minutes should suffice for me to track down and repair any issues...
Admin
indeed: Consider original code
version 1.1
small changes can have massive impacts.
Something about butterflies flapping their wings and tsunamis...
Admin
Nagesh is on holidays. When did you work in India? Failing to compile is a bug that should be picked up by testers well before deployment into production. Good quality programmers are expensive and shouldn't waste time on such trivialities as compiling code.
On a more serious note, I have been in situations where this appears to be the case, but it is simply that not everything that was changed was committed. In particular, I've found header files missing. Also, an individual component may build fine, but issues may arise when a full build is attempted, depending on which files are mutually used. (I remember one example where a library had been changed, and the signatures of some methods had changed as a result. This meant that other components relying on that same library wouldn't build under a full release. Admittedly, the libraries were shared in weird ways, and there should have been mechanisms to minimise the chance of this happening, and a load of other things, but we don't always pick the codebases we inherit. His deployment went through fine, but when subsequent changes had to be made, it took a while to realise that the library had been destroyed by a previous change.)
Admin
When they went to implement editing posts, they realized you can't really insert things into the middle of a file (if the new post is longer than the old one), so they implemented it as:
As a fun side note, they also didn't think to validate their inputs, so you could become an administrator just by putting the control character used for field separation in your username followed by a 1. You'd have a post count of "January" and other such nonsense, but oh well.
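Just to spell out the injection (assuming, hypothetically, that each record is one line of fields joined by a control character, with the admin flag as the third field; the actual file format isn't shown here):
[code]
SEP = "\x1f"  # hypothetical field separator (ASCII unit separator)

def make_user_record(username, post_count, is_admin):
    # No input validation: the username goes into the record as-is.
    return SEP.join([username, str(post_count), "1" if is_admin else "0"])

def parse_user_record(record):
    fields = record.split(SEP)
    return {"username": fields[0], "post_count": fields[1], "is_admin": fields[2] == "1"}

# A new user signs up with a crafted username containing the separator:
evil_name = "bob" + SEP + "January" + SEP + "1"
record = make_user_record(evil_name, 0, False)

print(parse_user_record(record))
# {'username': 'bob', 'post_count': 'January', 'is_admin': True}
# The injected fields shift everything over: nonsense post count, admin flag set.
[/code]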
Admin
Agree 100% that developers should never test their own stuff. Aside from all else, developers have in mind the problem that they are trying to fix, and will most likely only test whether that problem is fixed (not necessarily whether other cases still work). Developers (especially support developers working on bug fixes) tend to have a closed world view, where all they see is the problem they are addressing. Couple that with the fact that most devs back themselves (which is normal: it is very difficult to get anything done if you doubt your own work) and are therefore likely to under-test and make assumptions about their change working as intended, and that makes testing by developers alone a touch dangerous. (That said, I would expect any reasonable developer to have at least performed basic sanity checking to make sure that the change appears to have made the difference expected.)
Admin
I'll assume you just worded things a little strangely in your haste... Tests are not a tool to make your program work, but rather a tool that highlights the parts of your program that do not work. An analogy (though not necessarily a very good one): the gauges on the dash of a car are not tools that help your car work, but rather tools that help you find issues with your car.
As for FR Cronin: while I agree with your point about pride, I thought Alex's point was that the criticality of an issue is based on a number of factors, and that an inconvenience to someone running a very non-critical system is not a high priority (especially if they have been engaged at a discount rate).
Admin
It's a stretched analogy to begin with, but the idea here is more that the supply chain has a defect. In theory, LG could have invested more to fix this defect, but they instead accepted the loss of profit.
Admin
FTFY
Admin
Worse still, it seems to promote an attitude of 'fix (or remove) the offending test script' rather than 'let's fix the actual problem'. I have worked in a place where it was acknowledged that some of the test cases appeared to fail, but that this was down to bad coding in the test case rather than bad coding in the application. I never really understood why we couldn't simply remove the test cases we knew to be broken (or change their expected result if we were happy that they failed).
Admin
Some code coverage tools actually check the possible paths through functions/methods, not just lines/statements.
Admin
Remember that in this case, the developers did not check for errors: No detection = blindly commit = the error becomes permanent.
And therein lies the hitch: even if the developers did normally check for errors, the check might have been accidentally omitted for the critical INSERT statement. We call those bugs. And then, if no one actually tested to see what happened when the INSERT failed, well then: no detection = blindly commit = the error becomes permanent (just as before).
I'm not sure I agree that recoverability lies entirely within "fault tolerance". Fault tolerance tends to focus on the idea that we work around, set aside, or ignore problems that occur. It tends to ignore the question of what comes after tolerance; that is, "We've tolerated the error, now how does it get corrected?"
My focus is hospital business (as opposed to something neat like rocketry). Suppose we make an error in processing an account: The result is that the payer refuses to pay. Generally we can resubmit...but to be able to resubmit we must be able to correct the error. Somewhere within the paper or electronic medical record, we must have enough information to reconstruct what happened and come up with a correct result; the alternative is a write-off (lost $). As processing shifts away from paper toward electronic, it becomes more important for all the original inputs to be retained: Otherwise we may be helpless to figure out exactly what went wrong.
Reconsider the "donations batch" story: I guess you could say that was the ultimate fault-tolerant system (since all faults were ignored). The issue was really the deletion of the batch: a fundamental design flaw in my view (whether faults are ignored or not). With the batch gone, no matter what went wrong, it can't be fixed.
There are lots of ways to keep the batch, but if you don't, and there was an error of any kind, you are helpless to recover. Current thinking on fault tolerance doesn't seem to deal completely with this area of recoverability.
Admin
The article has a point, but what do you do when everything is done almost exactly the opposite way? Minimal or no testing at all, or, in the best cases, the code is written and then committed to the repository without ever having been compiled. This is usually a problem with certain individuals; as the article implied, you should have some pride in your work. If you constantly skip every problem or defect you come across by saying "ain't my job", then the problem is you.
Unfortunately I have had the "pleasure" of working with these kinds of guys over the years. Perhaps the most infuriating, and at the same time funniest, comment was "do it fast now, you can fix it later", and that was in a debate over whether I would write bad code or not. Writing bad code on purpose is the worst thing about these people. The time it takes to think about what you're doing and do it as right as possible on the first pass is most probably less than the time spent writing the bad code first, then writing it again, then fixing it, and then getting someone else to fix it because you forgot what it was supposed to do in the first place.
I've also written several programs that run tests, for code and for hardware. Usually the problem with testing software in general, at least in our company, is that the people who write the documentation aren't software designers, and the information, especially for hardware testers, sits in viewpoint-specific documents that tell the developer nothing about what they want, what the product should be doing, and how it should be tested.
Perhaps there should be some kind of tests for the people who are being hired to do a certain job. Oh, but hey, interviews are considered acceptance testing. But what about the other four points of testing? Haven't seen those applied at the interviewing stage. Perhaps they should reconsider...
Admin
Yes, those things happen, no doubt about it ("Who hasn't done this?" or whatever the catchphrase is), and it's something that regularly scheduled builds (using Cruise Control or Hudson or something sweet like that) usually catch adequately, just as those tools do with the knuckle-draggers who haven't checked whether their code even compiles. Unfortunately, because such tools stop the code in its tracks before it can get as far as Test, the perceived seriousness of this perpetration is smaller than perhaps it ought to be. ("What, Bubba checked in a Java class with undeclared variables again? Cruise Control caught it. Never mind, we fixed it for him...")
Admin
They might have done both. "Gahd dammit, Wolverine, that's your last screw-up! Go and get a job as a barber or something! Hmm ... reckon someone would buy this thing cheap? It's only scratched ..."
Admin
TRWTF are American refrigerators, obviously. Everything's always bigger in the US...
Admin
I rarely see that attitude. More likely, "But as the developer, I'm least qualified to say whether it's production-ready". Which is true.
Admin
I came here expecting to find this comment. I was disappointed it didn't appear until the second page of comments.
Admin
How about developers who WANT to test?
Admin
And then you also have path coverage, where every possible path through the decisions is counted.
The above-mentioned code can be given 100% line AND decision coverage with only two test cases (x<0 and x>0) without finding the null pointer exception. However, with full path coverage, you need to add a third test case (x:=0), which also exposes the potential null pointer exception.
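The code being referred to isn't quoted above, but a sketch of the kind of thing described (in Python, where calling a method on None plays the part of the null pointer exception) looks like this:
[code]
def describe(x):
    label = None
    if x < 0:
        label = "negative"
    if x > 0:
        label = "positive"
    return label.upper()   # blows up when label is still None

# Two test cases, x = -1 and x = 1, execute every line and take every branch
# of each `if` at least once: 100% line and decision coverage, no failure seen.
describe(-1)
describe(1)

# Only a third case, x = 0 (the path where both conditions are false),
# exposes the latent crash:
# describe(0)   # AttributeError: 'NoneType' object has no attribute 'upper'
[/code]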
The downside of path coverage is that it's harder to measure. In more complicated code, it may not be at all evident whether a given path, such as the fourth possible one (both if statements evaluating to false), is feasible at all (e.g. if the value of x may be changed inside the first if). Hence path coverage is harder to measure than line or decision coverage.
Then, as discussed previously in the thread, you may also have to discuss coverage of user inputs etc...
Admin
Oh yes... they most certainly do! I work with some of them.
Admin
Interesting read... You should really check out ISTQB or ASTQB:
http://www.istqb.org or http://www.astqb.org/
Particularly the Foundation glossary. It has some of the concepts you mention here completely chewed out for you, plus tons more (especially the test levels and the risk calculation).
Regards, Niels (ISTQB CTAL TM)
Admin
Still, I think there are more intelligent ways to pick only the most interesting cases to test.
Admin
To follow up on what Coyne said, here's another real-world example. For the past while I've been supporting and extending the "project from hell". The original developer quit unexpectedly within weeks of the project being "done", and as we began digging into the code it became apparent why he had done so: by quitting at that time, he could still give the illusion that the project worked and was on track, when in reality it was a steaming pile of excrement.
The project is primarily written in Silverlight, with WCF services talking to a SQL Server backend through an Enterprise Library ORM. (Except for the EL, this is all pretty standard stuff. None of the rest of us had seen EL in active production use in 3 or 4 years.)
This wasn't the developer's first Silverlight app, though it was his first "professional" one and the first one where he tried to use MVVM. However, he tried to stuff MVVM into his existing framework of knowledge instead of the other way around. So in some cases it's as far from MVVM as one can possibly be, while in others it kind of flirts with the edges of what one might consider to be MVVM-like.
One of the developer's "innovations" was the use of an XML property bag in his objects for all fields that were not directly linked to the object's lifetime. Hey, great - now we don't have to adjust the database schema when something changes, amirite?
Ultimately, a number of bugs around this system helped us to realize:
Several weeks ago, for an unrelated item, we added a trigger-based audit trail on several key tables in the database. Basically, whenever any event occurs on a record, its values are stored as a snapshot in a separate table. However, this audit trail has now saved our butts twice when some kind of data corruption occurred and we were able to easily identify the last-known-good state of certain records and immediately roll them back to that state.
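A minimal sketch of that kind of trigger-based snapshot, done here with SQLite from Python for brevity (the real system was SQL Server, and the table and column names are invented):
[code]
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE records (
        id     INTEGER PRIMARY KEY,
        status TEXT,
        amount REAL
    );

    -- Snapshot table: one row per change, so any record can be rolled back
    -- to its last-known-good state.
    CREATE TABLE records_audit (
        audit_id   INTEGER PRIMARY KEY AUTOINCREMENT,
        record_id  INTEGER,
        status     TEXT,
        amount     REAL,
        changed_at TEXT DEFAULT CURRENT_TIMESTAMP
    );

    CREATE TRIGGER records_audit_update
    AFTER UPDATE ON records
    BEGIN
        INSERT INTO records_audit (record_id, status, amount)
        VALUES (OLD.id, OLD.status, OLD.amount);
    END;
""")

conn.execute("INSERT INTO records (id, status, amount) VALUES (1, 'good', 100.0)")
conn.execute("UPDATE records SET status = 'corrupted', amount = -999 WHERE id = 1")

# The pre-update values are preserved and can be restored once corruption is noticed.
print(conn.execute("SELECT record_id, status, amount FROM records_audit").fetchall())
# [(1, 'good', 100.0)]
[/code]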
The ultimate fix - sanity and business rule checks on both the client and server side and preventing users' client apps from uploading any data besides their own - is still in testing, targeting a production deployment later this week. Until then, we're babysitting the database using an audit trail that wasn't even designed for this specific purpose.
Admin
Because being a good tester is about being able to correctly answer 26/40 multiple choice questions.
Admin
Wow. This is the biggest WTF I've ever seen. A software tester who understands that 100% code coverage != quality! What's next? A software quality person who understands that more process won't always prevent the introduction of bugs? Or a manager who understands both? (OK, I know the manager thing is simply asking for too much.)
Admin
Oh, come on... If Akismet had allowed me to post a very fine picture, or a link to one, of a school bus that had crashed through a house, I would have. My problem is that I should have completed my critique by stating as much. (Do a GIS for "bus crashes into house"... that got me results.)
/CAPTCHA augue: what two people do when they have a disagweement.
Admin
The general contractor is the person you need to deal with, not the sub-contractors. The general contractor will be the one to get it fixed, and he'll delegate.
You can test all day, but there will always be something wrong somewhere that gets missed. Flawed logic statements or flaws in building materials always happen, and you'll never see either until something goes wrong.
Admin
I'm talking specifically about the testers' ability to clearly report failing tests and defects. How do you expect to efficiently deal with bugs if you don't have any details about them?
Admin
Clouseau!!!!
Admin
There's a difference?
Admin
Yes. Knuth would be an obvious example; in at least one piece of his correspondence he notes, "Beware of bugs in the above code; I have only proved it correct, not tried it."
Also, think of the number of people who turn "i=i+1;" statements into "i++;" statements, run automagic code formatters without testing, or are just plain old "it runs once, so every line in this file is automagically good" kinds of people.
Admin
The WTF is my damn penguin, I'm pretty sure I'd strangle the where?
Admin
TRWTF is clearly insecure septic tanks trying to advertise their purportedly outsized genitalia in programming blog comments. Who's that for the benefit of then?