Admin
One way you can tell that "Quis custodiet ipsos custodes?" is my quote and not Plato's is that I, being Roman, spoke Latin, whereas Plato, being Greek, spoke Greek.
In future please attribute my quotations correctly. Thank you.
Admin
Yeah, which means that for those of us who are QA Devs, we get a manager who has no idea how to manage devs (or, one suspects, manage anything; he's barely coping at his job). I guess you herd drooling morons around by getting a bigger drooling moron to create rivers of drool which they follow.
Admin
The Sixties called. They want their clock back.
Admin
I have a minor comment about who is responsible for deciding the level of quality vs quantity. The customer doesn't always have the knowledge to decide the level of quality, or rather to understand the implications of errors. Or he/she needs to describe it in a general form that the developers can understand and work from.
Saying you have a small error in the compressor of a freezer is not enough; you need to know whether the problem might cause the freezer to stop working and spill fluids on the floor, which could be a big deal for the customer, or whether it might only cause the freezer to hold a somewhat higher temperature than specified.
Clients usually want a cheap system but with high quality. They might ask if you have tested the system and it worked, and as you wrote, testing can be many things. This is where I think BDD might help: the customer can read some tests and see if they reflect the wanted quality. Things not tested (because tests were removed to save time or never written at all) might break or not work as expected.
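To make the point concrete, here is a minimal sketch of the kind of readable, behaviour-style test the comment has in mind. The `Freezer` class, its thresholds, and the compressor scenario are all invented for illustration; plain `assert`s stand in for a real BDD framework.

```python
# Hypothetical example: a behaviour-style test a customer could read.
# The Freezer class and all thresholds are invented for illustration.

class Freezer:
    def __init__(self, set_point=-18.0):
        self.set_point = set_point
        self.current_temp = set_point

    def compressor_degraded(self, offset):
        # A degraded compressor lets the temperature drift upward.
        self.current_temp = self.set_point + offset

def test_degraded_compressor_keeps_food_frozen():
    """Given a freezer set to -18 C,
    when the compressor degrades by 3 degrees,
    then the contents must still stay below -10 C."""
    freezer = Freezer(set_point=-18.0)
    freezer.compressor_degraded(offset=3.0)
    assert freezer.current_temp < -10.0

test_degraded_compressor_keeps_food_frozen()
```

The docstring spells out the acceptable failure mode ("a few degrees warmer is tolerable, thawed food is not"), which is exactly the quality decision the customer, not the developer, should be making.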
Admin
One problem I consistently see when people write about testing is the assumption that functional tests are test scripts which are then performed manually by a tester. The assumption is often that only unit tests are automated.
For much (probably most) software this is true and valid. However, if a piece of software is expected to live and be updated for years, automating the feature tests may be necessary.
For a large application which has to ship in multiple languages, run on multiple platforms and undergo regular updates the cost of manually testing the app over time can far outstrip the cost of automating the feature tests.
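As a rough sketch of what replacing such a manual script looks like: the `greet` function and its translation table below are invented stand-ins for the application under test, but one automated loop covers what would otherwise be a manual checklist repeated per language on every release.

```python
# Sketch of an automated feature test. greet() and TRANSLATIONS are
# invented stand-ins for the real application under test.

TRANSLATIONS = {"en": "Hello, {}!", "de": "Hallo, {}!", "fr": "Bonjour, {} !"}

def greet(name, locale):
    return TRANSLATIONS[locale].format(name)

def test_greeting_in_every_supported_language():
    # One automated pass replaces a manual script repeated per language.
    for locale in TRANSLATIONS:
        message = greet("Ada", locale)
        assert "Ada" in message, f"name missing in {locale} greeting"

test_greeting_in_every_supported_language()
```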
Admin
What, nobody picked up that "quis custodiet ipsos custodes" was Juvenal, not Plato?
Admin
Oh yes, I see Juvenal himself did :-D
Admin
"So while the number of possible inputs to most programs is huge, it is not infinite."
Your example is rather bland. How about throwing a 40-or-so-character regex into the mix and coming up with all the inputs "proving" it works no matter what?
You can describe all the valid inputs but never all the invalid ones, and that is the essential problem.
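A small sketch of the asymmetry being described, using a deliberately simple date-like pattern (chosen for illustration, not a real 40-character regex): the valid inputs can be listed, but the invalid ones can only ever be sampled, here by random fuzzing.

```python
import random
import re
import string

# Illustrative only: the point is that valid inputs can be enumerated,
# while the invalid input space can only ever be sampled.
PATTERN = re.compile(r"^\d{4}-\d{2}-\d{2}$")

# The valid side: a finite, describable set of accepted shapes.
for s in ["2023-01-31", "1999-12-01"]:
    assert PATTERN.match(s)

# The invalid side: an unbounded space we can only probe at random.
random.seed(0)
for _ in range(1000):
    s = "".join(random.choices(string.printable, k=random.randint(0, 12)))
    if PATTERN.match(s):
        # Any random hit is, by the pattern itself, exactly 10 chars long.
        assert len(s) == 10
```

A thousand random strings barely scratch the invalid space, which is the commenter's point: no test run, however large, "proves" the regex against everything it might be fed.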
Admin
"as we all should know, 0.999... = 1"
It's starting to become accepted that that is not true.
If 1/infinity == 0, then it is possible to divide a small enough number and end up with nothing.
I am on the side that 1/infinity == 0.000...<infinity>...0001,
so that 0.999... + (1/infinity) == 1.
It seems obvious to me that the series 9/10 + 9/100 + 9/1000 + ... will never give 1 by definition. Never.
Admin
While it's not a common thing, they actually do that sometimes: http://en.wikipedia.org/wiki/Spirit_rover#Stuck_in_dusty_soil_with_poor_cohesion
Admin
Interesting article. One area of testing I'm interested in is absence testing, i.e. if the program is supposed to identify something, such as the number of widgets with defects, or in banking to return the exact transactions that are fraudulent, is it returning all of them or only a portion of them? Testing the output does not account for items not on the output.
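One common way to test for absences is to seed the input with known-positive records and then check that every one of them appears in the output, rather than inspecting only what was returned. The detector, field names, and threshold below are all invented for illustration.

```python
# Sketch of an "absence" test: seed known-bad records, then verify the
# detector reports every one of them. All names here are invented.

def find_fraudulent(transactions):
    # Deliberately naive detector used only for this illustration.
    return [t for t in transactions if t["amount"] > 10_000]

known_fraud = [
    {"id": 1, "amount": 15_000},
    {"id": 2, "amount": 50_000},
]
legitimate = [{"id": 3, "amount": 20}]

flagged_ids = {t["id"] for t in find_fraudulent(known_fraud + legitimate)}

# The recall check: every seeded fraud case must appear in the output.
# Checking flagged items alone would never reveal a silently dropped case.
missing = {t["id"] for t in known_fraud} - flagged_ids
assert not missing, f"detector silently dropped cases: {missing}"
```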