Admin
Maybe I'm some freak of nature, but I tend to enjoy things that other developers roll their eyes at. I have no preference between working on bugs in existing code and writing new features. From what I've seen, most people hate fixing bugs (particularly in code they didn't write). Likewise, I enjoy testing.
The problem is that my career options feel greater as a software developer than as a software tester. If I go to Dice or CareerBuilder or wherever, there are a lot of dev jobs I fit the profile for, but fewer QA-type jobs, and the QA jobs usually pay less. And with every additional year of development work, switching seems harder: an entry-level QA job doesn't pay enough, while the well-paying QA jobs require X years of experience in QA, which, as a developer, I don't have.
Admin
TRWTF is cockney rhyming slang.
Admin
That's the point I'm trying to make: if you can make more money and have greater career opportunities, why would you settle for QA (usually lower pay, and a dead-end job unless you go into management)? I just leave it at "to each his/her own".
I know of a few people who went from doing development full-time into IT security (white-hat hacking, etc.), which I can see, but I have yet to see a "good" developer (or someone who could easily do development) go into QA.
Admin
... sorry, in response to what boog said a little while back.
Admin
I disagree. Unit testing is different from integration testing. Unit tests should be testing classes in isolation, and should not involve external resources (databases, file system, web services, etc.). At least for TDD, a primary purpose of unit tests is to drive design.
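To make the isolation point concrete, here's a minimal sketch (Python, with invented names like InvoiceService, not anyone's real code): the collaborator that would normally hit a database is replaced with a test double, so the unit test exercises exactly one class and touches no external resource.

```python
# Minimal sketch of a unit test in isolation; InvoiceService and its
# repository are hypothetical stand-ins.
import unittest
from unittest.mock import Mock


class InvoiceService:
    """Computes invoice totals; the repository is injected so it can be faked."""

    def __init__(self, repository):
        self.repository = repository

    def total_for(self, customer_id):
        # In production this call would go to a database.
        return sum(self.repository.amounts_for(customer_id))


class InvoiceServiceTest(unittest.TestCase):
    def test_total_sums_amounts_from_repository(self):
        repository = Mock()
        repository.amounts_for.return_value = [10, 20, 12]

        service = InvoiceService(repository)

        self.assertEqual(42, service.total_for(customer_id=7))
        repository.amounts_for.assert_called_once_with(7)


if __name__ == "__main__":
    unittest.main()
```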
Permitting customers or managers to dictate low quality levels is unprofessional, in my opinion. Uncle Bob Martin has many presentations on this subject. Is it ethical for an electrician to do a shoddy job in a new housing development because the developers asked him to? To quote Kent Beck, "Quality is terrible as a control variable."
Admin
CAPTCHA: mara - is this the name of the girlfriend?
Admin
You come here like a crack addict and make excuses for others?
Admin
I'd just like to point out: it is still theoretically possible to get hit by a bus even if you never leave your house.
99.999%...
Admin
I would use it for space to store trays - you know, the big ones that go on your lap so you can eat dinner in front of the telly alone.
Admin
Or is it you hitting the bus? :)
Anyhow, my experience with TDD is that people spend ages faffing about writing tests and not actually getting anywhere fast. I knock something together and get it working fast, then consider writing tests, which helps answer "and how could this be done better?".
After all, in just about every other creative human endeavour you start out with rough sketches and see how they suit your needs (an architect designing a new house, an artist making a new character, etc.). Then you increase the resolution and accuracy as you progress (almost like zooming in on a fractal). You don't get the architect and some builders into the middle of a field and just start building. In addition, you are adding "mass" to the system, which makes it more time-consuming to go back and change fundamentals (refactoring) when you could be more lean. (Funny how Agile types get rid of most/all of the spec/analysis docs, then end up writing them in a "cool" new way.)
In the future I'd like to see something like JTAG, where you don't write separate test code but instead have probes/tripwires in the real code (maybe even as part of the language - Ada 95?), plus datasets that can set them up and check the values that pass by, rather than test code which can be obscure in itself.
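Roughly what I'm imagining, sketched in Python with a toy, home-grown probe registry (nothing standard, purely hypothetical): the production code publishes values at named probe points, and the harness/datasets attach checkers to those points instead of living in a separate, obscure test codebase.

```python
# Toy probe/tripwire registry; all names here are invented for illustration.
_probes = {}


def probe(name, value):
    """Called from production code: hands the value to any attached checker."""
    checker = _probes.get(name)
    if checker is not None:
        checker(value)
    return value


def attach(name, checker):
    """Called from the harness: watch a probe point with a checker function."""
    _probes[name] = checker


# --- production code, instrumented in place ---
def apply_discount(price, percent):
    discounted = probe("discounted_price", price * (1 - percent / 100.0))
    return round(discounted, 2)


# --- harness/dataset side ---
if __name__ == "__main__":
    attach("discounted_price", lambda v: print("probe saw", v))
    print(apply_discount(100.0, 15))   # prints "probe saw 85.0" then 85.0
```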
My favourite issue with tests is that they don't really say whether your product actually works; they just say that for this very specific dataset, this is what will happen in this environment. I have seen many a test suite produce 100% green lights yet fail in real use, simply because the test data was not aligned with the real-world data/usage (very apparent for data-driven apps).
Lastly, people who are 100% code coverage fundamentalists seem to build code that has little fault tolerance in it (in my experience, anyway). I always try to make a program/function fail gracefully and clean up after itself. It's all very well that an exception was thrown because it was the "correct" thing to do; that doesn't help your users at 2am if everything just disappears into a black hole!
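Something along these lines, for example (a made-up nightly import job - the names are mine, not from any real system): the job stages its work, logs a useful message if it fails, and always cleans up after itself instead of vanishing into a black hole at 2am.

```python
# Sketch of "fail gracefully and clean up"; the job and its helpers are hypothetical.
import logging
import shutil
import tempfile

log = logging.getLogger("nightly_import")


def run_import(source_path):
    workdir = tempfile.mkdtemp(prefix="import-")
    try:
        shutil.copy(source_path, workdir)      # stage a copy of the input file
        # ... parse and load the staged copy here ...
        return True
    except OSError as exc:
        # Fail visibly and say what went wrong, rather than dying silently.
        log.error("Import of %s failed: %s - nothing was loaded", source_path, exc)
        return False
    finally:
        shutil.rmtree(workdir, ignore_errors=True)  # always remove the staging area
```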
Tests, in reality, are like the Chinese proverb about a man and the time:
A man with one watch knows the time. A man with two watches is never quite sure.
How many people here change their tests/data to get things to pass? This is where a proper analysis/spec document explaining the why is required.
Ah well, let's wait and see what the next software development band wagon is!
Admin
So you don't like testing at least some of your own code ... hm. And you don't like fixing bugs ... hm. Both of which are needed in any -worthwhile- s/w project (one that is sure to be a steaming pile of dung without either of the above).
Explain to me why you are a developer, then?
Admin
I thought the defect was going to be that he bought a refrigerator too short. That gap at the top is unsightly.
Admin
The way it was expressed (from memory) suggested (to me at least) that having techos test would fix the problem because they could better articulate the problem.
<copout>That said, I wasn't necessarily disagreeing or arguing either, merely discussing</copout>
Admin
Didn't Knuth the Polar Bear die recently?
Admin
Not sure if you are addressing that to me or not, but anyhow, I am not saying testing is bad, just that you should a) understand its limits and b) not sit there spending a day writing tests for code that does not exist and has no form yet.
Use it as an aid to make the code better designed, sure; use it as a starting point, well, not so much.
I think the issue is that requirements are soft at first, but writing tests first makes them concrete and absolute, without the ability to play with the form first. For example:
"I want a car and door handles about waist height"
Code world: Test: Door handle at x,y,z coordinate. Code: Build code with door handle at x,y,z coordinate.
Show to user(s), "Yeah it is ok but a bit higher would be nice and more laid in"
Result: Lot of "production" level code and tests to change.
Real world: Make foam door quickly (hack and slash), stick block of foam as handle on it.
Show to user(s), Let them move it about.
Result: Cheap none production item which then can be realised later to production quality with hard set tests.
This is how most real things are made, inexpensive as possible first then once realised the form make to production quality with all quality tools to bear.
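To put that in code terms, here's a deliberately silly sketch (the Door class and the 900 mm figure are invented): the test-first version has already turned "about waist height" into an exact number, so the "a bit higher and more laid in" feedback means reworking both the production code and the test that pinned it down.

```python
# Toy illustration of a test hard-coding a requirement that was still soft.
import unittest


class Door:
    def __init__(self, handle_height_mm):
        self.handle_height_mm = handle_height_mm


class DoorTest(unittest.TestCase):
    def test_handle_is_at_waist_height(self):
        # "About waist height" has quietly become exactly 900 mm.
        self.assertEqual(900, Door(handle_height_mm=900).handle_height_mm)


if __name__ == "__main__":
    unittest.main()
```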
Interesting side point: software built via unit tests, IoC, etc. ends up as many more pieces of software with more interfaces and interactions. Real-world production of physical items is always driving down the number of parts and interfaces, as these are what make things more expensive/less reliable.
Perhaps software should be built twice: once quick and shitty, the second time to production quality (learning from the mistakes). But hey, who is going to pay for the second part? ;)
Admin
Nope, can't grasp the context. What is this "alone" you speak of?
Admin
The lesser-testers just send an email saying "it's not working", leaving you to figure out what that even means.
As was I, my communicative cohort. As was I.
Addendum (2011-03-23 18:12): Also: not that having smarter testers would "fix the problem", but rather that I prefer smarter testers over chimps.
Admin
I tend to agree - I am equally happy in support as in development (although development where I had full control to dictate design and technology choices might be interesting). I had a manager once who insisted that all developers prefer to be in development roles (I was arguing that I was more than happy not to be rotated between support and development). He found that bizarre, because he claimed people like development because they get more exposure ("Wow, look... we actually released something"). Frankly, I suspect support staff get far more credit, because they fix problems that already exist - that is, they stop problems that are already affecting users (<cynicism>rather than introducing new ones</cynicism>). Clients don't appreciate a good product until it has proven itself a good product, and by that time the development team is long forgotten...
Admin
What on earth are you talking about? Of course the developer has to test their own stuff. How else will they know they've actually done the job? Someone else should also test their stuff, certainly, but the developer has to be the first person to give it a thorough test.
One of the biggest scourges in this industry is developers who never test their work. "Oh, the unit tests / QA team will find any problems, so I don't need to bother." Too many times I've seen some idiot implement a repeater with paging controls and tell me it's done, only to find that had they tried even ONCE to see if the paging worked, they would have realised that it didn't.
Developers need to be responsible for their work. That means they must test what they do thoroughly. By all means have additional testing to verify their work, but this business about never "letting" developers test is a complete load of horseshit. By the time the work gets to the testing team, bugs should be extremely hard to find or non-existent. That does not happen by accident.
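For the paging example, the kind of basic check I mean is nothing more than this (paginate is a stand-in for the repeater's paging logic, not anyone's actual code): run it once before saying "done", and still hand the work to the testing team afterwards.

```python
# Quick sanity check a developer can run before declaring paging "done".
def paginate(items, page, page_size):
    """Return the items for a 1-based page number."""
    start = (page - 1) * page_size
    return items[start:start + page_size]


if __name__ == "__main__":
    data = list(range(1, 26))                            # 25 items, 10 per page
    assert paginate(data, 1, 10) == list(range(1, 11))
    assert paginate(data, 2, 10) == list(range(11, 21))  # page 2 actually advances
    assert paginate(data, 3, 10) == list(range(21, 26))  # last, short page
    print("paging sanity check passed")
```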
Admin
I know many others have said similar things, but this seems to be becoming the bi-weekly WTF (maybe tri-weekly)....
Admin
Great post. While I agree with the point you're trying to make - that at best (with unlimited resources) you can be 99.99-something-percent confident to some finite number of digits, and never 100% - the math nerd in me wants to point out that:
99.9999... = 100
100/9 = 11.111...
9 * (100/9) = 9 * (11.111...)
900/9 = 99.999...
100 = 99.999...
QED
Admin
Or, rather recently, from around where I live:
[image]
BTW: This is not funny. Two people died in this accident.
Admin
Toy bus photo here!
Admin
That's an interesting soapbox. I'd love to hear some thoughts on that one. Especially on telling the customer they can't have yet another useless, expensive feature.
Admin
I met a few, and they don't really want to test everything, but they have a certain interest in proving that what other people did was wrong, and they came up with really interesting ways to rape our systems.
Another thing that has not really been discussed here is that testing is usually one of the first things (after documentation) that goes overboard when hard deadlines have to be met. And sometimes the deadlines can be so hard that you throw everything overboard and have the go-live as an integration test... that, dead people, is the unfortunate thing called "life outside the ivory tower".
Admin
BTW, if you do this, please post a link here. With the lack of articles lately, there's been nothing to do but turn to trolling.
Admin
Pretty much, lately it's been The Daily "WTF? No article again?"
Admin
Oh, see, you thought "W" was for "What". It's actually for "When."
Admin
I think you meant "dear people", unless you were talking about Therac-25 or something.
Admin
Alex, nice article...
But you're making it sound as if testing is only done to catch errors introduced by changes. As if the testers are supposed to ignore errors that slipped by during testing of the previous release. This of course is not intended or wanted.
Testing should ensure the quality of the upcoming release. Not just the changes from the last version to the upcoming version.
Suppose a tester finds a possible bug that has been in the codebase for ages. "won't fix" because it wasn't introduced with a change for the upcoming release? Absurd.
Admin
http://grammar.quickanddirtytips.com/biweekly-versus-semiweekly.aspx
Although, you get 10 pedant points for using the four dotted ellipsis at the end of a sentence....
Admin
I almost forgot about one of the most effective strategies for identifying defects. Have developers test each other's code. Make sure to match up developers that really dislike each other.
Admin
It's still early, but that's the saddest thing I've read all day. Don't you have a newspaper and a kitchen table?
Admin
The newspaper's a good idea to spread on the sofa to catch dribbles, but I'm not sure the kitchen table would fit on my lap.
Admin
OK, the real WTF in that is the picture of the clock... Wow, that's ugly.
Admin
So, he's implying that the article is posted every two weeks, not twice a week. Who cares? I think the point was that it seems a long time between drinks.
Shouldn't there be 3 dots, not 4? Or was that your point?
Admin
No offence intended, but in my experience the people who seem to be the biggest perfectionists about others' code are the ones that struggle to produce their own (I'm not saying this is necessarily the category you fit into, btw).
I have worked with many people who, in code reviews, would get pedantic about a miscellany of issues: misspellings in change descriptions, copyright notices needing updates, indenting slightly askew, insisting people identify where allocated memory was freed (which always made me think they couldn't find the spot themselves), insisting that pointers to freed memory be nulled, variable names, etc. Although some of these things may be good practice, it seemed that these people liked to appear productive at code reviews to make up for the fact that there was rarely any of their own code being reviewed (and when there was, it would almost always either be copied from elsewhere or actually done by someone else - or at least, any issues that came up in reviews of their code were dismissed as somehow being someone else's fault).
Admin
The most robust system I ever helped design and build (handling continuous large-scale transfers of data between academic institutions) was robust because it assumed from the outset that every single operation would fail.
Actually, it wasn't so much an assumption as a statement of fact about the components we were building our system from.
This meant our state model for the system's workflow had to be as tight as could be, and that we explicitly looked to handle failures.
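As a rough illustration (the job and its states are a simplified invention, not our actual workflow): every operation is allowed to fail, retries are explicit, and FAILED is a first-class state the workflow knows how to reach, rather than a crash.

```python
# Sketch of a workflow that assumes every operation will fail; names invented.
import random


def unreliable_transfer(chunk):
    # Stand-in for a network copy that fails a lot; in the real system this
    # was simply the observed behaviour of the components we built on.
    if random.random() < 0.3:
        raise IOError("transfer of %s failed" % chunk)


def run_job(chunks, max_attempts=5):
    print("state: TRANSFERRING")
    for chunk in chunks:
        for attempt in range(1, max_attempts + 1):
            try:
                unreliable_transfer(chunk)
                break                                   # this chunk made it
            except IOError:
                print("  %s failed on attempt %d, retrying" % (chunk, attempt))
        else:
            print("state: FAILED")                      # an explicit state, not a crash
            return "FAILED"
    print("state: VERIFYING")
    # ... checksum verification goes here, and it too is allowed to fail the job ...
    print("state: DONE")
    return "DONE"


if __name__ == "__main__":
    print(run_job(["chunk-%d" % i for i in range(10)]))
```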
Admin
One thing that annoys me is the 100% code coverage zealots. OK, test the "critical" bits, but checking one-line getters that just return a member variable is a bit pointless - more so with mocks, as you are returning a fake item which you already know is valid or invalid.
Usually such code is very easy to break in the real world: pass in a negative number/NaN/null and watch the house of cards come crashing down.
I'd rather have intelligent, targeted tests than paint-by-numbers coverage.
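Something like this toy example of what I'd call a targeted test (parse_discount is hypothetical): it throws the negative/NaN/null junk at the code, which tells you far more than ticking off coverage on a one-line getter.

```python
# A targeted test aimed at hostile real-world input; the function is invented.
import math
import unittest


def parse_discount(value):
    """Turn user input into a discount fraction between 0 and 1."""
    if value is None:
        raise ValueError("discount is required")
    number = float(value)
    if math.isnan(number) or not 0 <= number <= 100:
        raise ValueError("discount must be between 0 and 100")
    return number / 100.0


class ParseDiscountTest(unittest.TestCase):
    def test_rejects_hostile_real_world_input(self):
        for bad in (None, "-5", "250", float("nan"), "nan"):
            with self.assertRaises(ValueError):
                parse_discount(bad)

    def test_accepts_ordinary_values(self):
        self.assertEqual(0.15, parse_discount("15"))


if __name__ == "__main__":
    unittest.main()
```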
Hell, in industry they don't test every single thing; they sample batches (e.g. a baker's dozen). Unless of course you are making bolts for a nuclear reactor - then you X-ray each of those bad boys!
I prefer to spend more time battle-hardening code for the real world, which involves having real-world data going into the system via the normal data channels, not through a mock or a standalone bit (I suppose that's more like an integration test, but not quite).
Bottom line: qualitative vs. quantitative testing.
Admin
That is not true. Or rather, the sentence in isolation is true, since it only covers "the inherent risk of change", but the overall message is not. The text has two assumptions* - that every change carries risk, and that leaving the code unchanged carries none - and draws from them the conclusion that "change is bad" from a risk perspective.
This is wrong. For instance, changing the brake pads on your car is not risk free: you might have bought the wrong replacements, or you might fit them incorrectly. However, not changing the brake pads is not risk free either (especially if you can hear them scream when being used!).
If you discover an off-by-one error in the source code, not correcting that bug might very well be more risky than correcting it. There is no way to decide objectively; specific judgment is always required.
I am in no way saying that change is risk free. And I fully buy that even the most innocent change might turn out to have some unintended consequence. In fact, even just changing comments might have an impact (not very likely, but if the comment is shorter/longer, the following __LINE__ tokens will change values and might now be one digit more or less; if such a token is made into a string by the pre-processor, the string is now one byte smaller/larger, which might push some nearby code into a different memory segment. And that might have a non-negligible run-time impact).
But to assume that not changing the code is risk free is just so utterly wrong.
You should always aim for minimizing risk during maintenance, and that is NOT the same as minimizing change!
* My interpretation of it, anyway - great if I am wrong on this, but then the text is very imprecise, in my opinion.