Admin
Mortgage Driven Development!
Admin
Don't know what to write. //Fix this comment later
Admin
I have seen similar naming conventions - but if I apply it to this, it means "TestCase 119a" - so you are supposed to look into the document describing the test cases. Still bad code, but referring to a TestCase is not too bad in my book... it forces people to read them.
Admin
The number is cumbersome, but the test case should be copied in the comment.
Anyway, there's not always a need for an assert in a test case. If Hold or Retrieve throws an exception, the test fails.
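A minimal sketch of that idea in Python's `unittest` (the `Archive` class and its `hold`/`retrieve` methods are hypothetical stand-ins for the Hold/Retrieve calls mentioned above): the test body makes no assertions at all, and the framework still reports a pass unless something raises.

```python
import unittest

class Archive:
    """Hypothetical stand-in for the object under test."""
    def __init__(self):
        self._items = {}

    def hold(self, key, value):
        self._items[key] = value

    def retrieve(self, key):
        return self._items[key]  # raises KeyError if the key is missing

class TestArchive(unittest.TestCase):
    def test_hold_then_retrieve(self):
        # No assertions on purpose: the test passes as long as
        # neither hold() nor retrieve() raises an exception.
        archive = Archive()
        archive.hold("tc119a", "payload")
        archive.retrieve("tc119a")
```

Run with `python -m unittest`; the comment in the test body is doing exactly the documentation job the next commenter asks for.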
Admin
Then at the very minimum document/comment that in the test. All it takes is:
// Wot? No assertions? Test is considered a pass if no exceptions raised.
Just those few words save the future developer the headache of trying to figure out the original developers' intentions.
Admin
thrixth!
Admin
Personally, I always include assertions, even if all I'm testing is that no exceptions were raised (or that the correct exception was raised). Assert.IsTrue(true) seems like a silly statement to write, but it's a nice indicator that if I hit this line, the test is currently passing. I can put an Assert.IsTrue(false) in the catch block for exceptions.
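The same pattern translated to Python's `unittest`, for illustration (`parse_config` is a made-up function): an explicit assertion marks the "we got here without blowing up" case, and the except branch plays the role of `Assert.IsTrue(false)` in the catch block.

```python
import unittest

def parse_config(line):
    # Hypothetical function under test: raises ValueError on malformed input.
    if "=" not in line:
        raise ValueError("expected key=value")
    key, _, value = line.partition("=")
    return {key.strip(): value.strip()}

class TestParseConfig(unittest.TestCase):
    def test_valid_line_parses_without_error(self):
        try:
            result = parse_config("colour = blue")
            self.assertTrue(True)  # explicit "we reached this line" marker
        except ValueError:
            self.fail("parse_config raised on valid input")  # Assert.IsTrue(false) analogue
        self.assertEqual(result, {"colour": "blue"})
```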
Admin
Some frameworks let you specify which exceptions are expected; why not use that, then...
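In Python's `unittest`, for example, that support is `assertRaises`; `convert_f_to_c` here is a hypothetical function that rejects temperatures below absolute zero.

```python
import unittest

def convert_f_to_c(fahrenheit):
    # Hypothetical converter: refuses physically impossible inputs.
    if fahrenheit < -459.67:
        raise ValueError("below absolute zero")
    return (fahrenheit - 32) * 5.0 / 9.0

class TestConvert(unittest.TestCase):
    def test_rejects_below_absolute_zero(self):
        # The framework catches and verifies the exception itself: no
        # hand-rolled try/except, no Assert.IsTrue(false) in a catch block.
        with self.assertRaises(ValueError):
            convert_f_to_c(-500)

    def test_minus_forty_is_the_crossover_point(self):
        self.assertEqual(convert_f_to_c(-40), -40)
```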
Admin
Not necessarily. I've been on projects which are documented to an insane level of detail, such that the actual paragraph numbers in that specification are expected to be recorded (either by comment or, more usually and insanely, by method naming convention) in the code itself.
What you often find, in such an environment, is that the senior developers are the ones writing that specification, while the juniors and monkeys are set the exercise of converting those specs to working code. And of course the unit tests are subject to the same level of management, but for some reason the actual coders can't find the appropriate level of enthusiasm for that task...
Admin
No, that is the biggest WTF today. Terrible terrible terrible.
Slipping into girlfriend speak now: "If you don't know why then there is no point in me explaining it to you"
Admin
TC14 - Assert.AreEqual(TC14,C3PO)
Admin
That's a bugbear of mine - unit tests that swallow a perfectly usable exception and replace it with a completely generic "oh, something broke". The original exception should at a minimum be logged somewhere first. Unit tests shouldn't be written to purposefully frustrate the person who has to diagnose the failures...
Admin
Since the test is not asserting anything, and no one knows what it's testing, and it only fails on exception,
then the obvious solution is to use the try, catch, discard pattern of exception handling. No exceptions reported = no worries.
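Spelled out in Python, sarcasm preserved: the try, catch, discard "pattern" produces a test that literally cannot fail, which is the point of the joke.

```python
import unittest

class TestNothingEver(unittest.TestCase):
    def test_anything_passes(self):
        # Divide by zero? Doesn't matter. Swallow it and report green.
        try:
            1 / 0
        except Exception:
            pass  # no exceptions reported = no worries
```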
Admin
(Not talking about your other antipatterns, such as obscuring the code flow with unnecessary booleans, or using booleans with inverted semantics ...)
Admin
Is it the most elegant way? Probably not, but maybe the function being checked can't be tested in an elegant fashion.
Admin
TC117a()
// Tests: "are you at your post?"
// Expected behavior: True
// Possible failure cases:
//   Bad communicator
//   Too short to be a Stormtrooper.
Admin
Correct me if I'm wrong but... a test case should assert that something occurred. If you just let it run and the pass condition is "No errors" then you aren't really testing anything that you can verify, because you aren't checking that Foo was changed to Bar or whatever you expect to happen.
Admin
This is what happens when you rely on auto-generated reports from "management" tools that just report pass/fail of unit tests and code coverage percentages, rather than actually doing statistical sampling reviews of developers' work.
A little pair-programming and mentoring go a long way, especially if code reviews are randomly done. ("No one expects the Spanish Code Review" - contemporized Monty Python)
Admin
Consider: You have a class method that turns Foo to Bar. You build a test that passes Foo to your method, and an Assert that the output is Bar. If your code works, your test passes (Green light).
Now, let's say six months from now, you add another method to the same class, or you change the functionality of your method. You write a test for this new functionality, and it passes (Green light). HOWEVER, this change altered the previous functionality, and now Foo returns DoubleFoo. This causes your first test to fail (Red light).
For me, the main advantage is not that my most recent changes get green lights, but that all PREVIOUS tests get green lights as well. I tend to break things I wrote six months ago, which is why testing is so useful. I might not notice the change to the earlier functionality otherwise.
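A toy version of that scenario (`transform` is invented for illustration): the "six months old" test keeps guarding the original Foo-to-Bar contract, so a later change that makes the function return "DoubleFoo" would flip this test red immediately.

```python
import unittest

def transform(value):
    # Original contract: "Foo" becomes "Bar".
    # If a later change made this return "DoubleFoo" instead,
    # the old regression test below would fail at once.
    return "Bar" if value == "Foo" else value

class TestTransform(unittest.TestCase):
    def test_foo_becomes_bar(self):
        # Written "six months ago"; still on duty.
        self.assertEqual(transform("Foo"), "Bar")
```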
Admin
Admin
I disagree with your first statement. That's akin to copying code - a reference to the test case is best (and ideally a reference to a specific version of the document). Otherwise you run the risk of the test requirement changing and the comment not changing. I hate copying anything!
Admin
+1
Admin
I always use assert for results:
(pseudocode, SFDC-style)
I'd only use try/catch when you EXPECT an exception. Otherwise, let it cause your test to fail on its own.
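That policy might look like this in Python's `unittest` (`apply_discount` is a made-up function for illustration): assert on results, and reach for exception handling only when the exception is the expected outcome.

```python
import unittest

def apply_discount(price, percent):
    # Hypothetical function under test.
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return price * (100 - percent) / 100

class TestApplyDiscount(unittest.TestCase):
    def test_assert_on_the_result(self):
        # Plain assertion; an unexpected exception fails the test on its own.
        self.assertEqual(apply_discount(200, 25), 150)

    def test_catch_only_expected_exceptions(self):
        # Exception handling only because the exception IS the expectation.
        with self.assertRaises(ValueError):
            apply_discount(200, 150)
```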
Admin
Matt, the door is that way, and don't let it hit you on the way out.
Admin
Wooooooooooow. Completely barking up the wrong tree.
Addendum (2014-02-13 13:49): Think about it: you've inherited an app at a small company.
You update the ConvertFtoC() and ConvertCtoF() methods to raise an exception if values below absolute zero are passed to them. You run the tests. You get this:
Test failed. Value passed to ConvertFtoC(): -40. Expected result: 57. Actual result: -40.
You wouldn't look into the unit test?
Admin
Conversely, the test could describe what it is testing and you could refer to that directly instead... and you'd already be reading it.
Admin
Yeah, if you have a test named "Test42" and just a comment saying "check Feature Request 2348 for what this means", you open up the possibility of losing that documentation, since it's stored separately. The unit test should have a meaningful name and some comments to at least give you an idea of what it should do if it's not obvious, although it doesn't have to have all of the documentation.
Admin
To be fair, the unit tests provided in the article are just useless.
Admin
Who tests the test code?
Admin
Are we going to start the comments on unit tests again?
Admin
"Nothing broke" is easy - that just means we didn't throw a whole bunch of exceptions. "Something happened" can be trickier, but each method/function/subroutine has a purpose, and we need to make sure that it is fulfilling that purpose.
The assertion measures that the result of the functionality was what you expected, and an exception means that there's a problem that doesn't even let the functionality execute - irrespective of whether it theoretically works (oops, we ran out of memory).
Admin
I have worked somewhere where failing test cases were either ignored, justified, or modified so that they don't fail (that's right, we changed the test case, not what it was testing)... Then again, they also used to wax on about Six Sigma - except they'd talk about defects vs. potential defects - and I figured this would always give a false sense of security, because by identifying a potential defect you would normally protect against it... the real skill would be finding the potential defects that you didn't think of... if you know what I mean.
Admin
As we know, there are known knowns; there are things we know that we know. There are known unknowns; that is to say, there are things that we now know we don't know. But there are also unknown unknowns – there are things we do not know we don't know.
Admin
Hmmm...
Admin
Yes, but I wasn't aware that I knew that.
Admin
The tester's tests test the testee's code, while the testee's code tests the tester's tests.
Of course continual testing can make testees testy. Peer programming can make this effect worse, as you can end up with testy testees clashing against each other like a Newton's cradle with 2 balls.
Admin
aaahhhahahahahahahahaha
Admin
Oh you fool -- it's not standing on anything! It's swimming!