Admin
Frist remove failing comment. Retry.
Admin
When it's too hot, just get rid of the thermometer.
That's the way to improve quality too: no failed metric when there is no metric.
Admin
This clearly shows a lack of long-term vision. Removing all tests should provide a more stable solution.
Admin
The failing test could of course just be garbage and/or redundant and so it could be valid just to remove it. Whether it is a WTF depends on what discussions happened in the planning game.
Admin
Without more detail this may not be a WTF.
Maybe the test was faulty itself or it was irrelevant, because it tested under conditions that are otherwise excluded. We don't know.
Admin
I cured the worrying knock and rattle in the engine of my car by the simple expedient of turning up the volume of the radio.
Admin
There is absolutely no justification for this. What you've got to do is fix the test so it is relevant. If it's redundant then it won't fail. If it's garbage then fix it so it's not.
Admin
Ignore it! We'll be fine.
Admin
This reduces the chance of accident because now you are more relaxed when you drive. :)
Admin
TRWTF is using post-it notes in 2014.
Admin
Dude, where's my jetpack? This future is totally bogus, dude.
Admin
Nothing wrong in this. In order to fix failing unit tests you need to first go into the history and determine why the unit test exists in the first place. Then you need to figure out whether you still need it today (maybe the functionality the unit test covers is obsolete now).
It is both lazy and smart to delete the failing tests and later add tests based on the current functionality, rather than trying to figure out why the unit test existed and since when it had been failing. It works in legacy systems where no one knows the entire functionality or has the time to do detective work.
Admin
Why remove when you can <Ignore()>?
Admin
Meh, I remove busted unit tests all the time. My team used to be really bad at writing tests, so a lot of them are crap.
TRWTF is the process nazi who took the time to write a post-it note for that and walk it across the whiteboard.
Admin
Where I'm at, we had a bunch of unit tests that were working at one point; then we stopped running them. Now we're back to running them (and as part of our CI build, too!). However, in the interim, many of the tests stopped working. In the interest of getting the CI build to run the tests, we fixed what we could; the rest we marked as ignore. Now the tests all pass, the CI build is good, and we can come back and fix the broken tests.
It's not a crime to remove a broken test -- maybe the business requirements changed, and now the test is no longer relevant; or a logic change moved some functionality somewhere else. I've seen it happen many times, where the code is right but the test for it is broken.
Admin
Right. Because removed features should still be tested. And garbage tests (like testing that 1+1=3) are always possible to make useful without making redundant, even if you have a full, complete set of tests for the same feature. And redundant tests are always updated at the same time as their duplicates avoiding all failures, and it's good to have redundant tests because this improves maintainability of the test environment.
Admin
http://www.thereaction.net/news/y2010/m11/thomas_midgley_jr_tel_cfc.aspx
Akismet says NO!
Admin
My check-engine light came on the other day so I removed the light. I don't see the issue here.
Admin
Yes, it's wrong to simply remove the failing test. The point of unit tests is to identify problems when refactoring. Thus you should be updating the unit tests as you refactor.
Admin
Everyone, it may not be a real WTF, but you're missing the point. This post is simply a metaphor. It's a new year and we should all strive to remove the "failing tests" from our lives.
Admin
reminds me of the Colbert Report segment on China lowering the health standard to reduce the number of smog alerts
Admin
:'(
Admin
Without the context it could be meaningless. Perhaps the unit test is testing functionality that no longer exists or was moved to another layer. Or maybe it is such an insignificant minor issue that it isn't worth holding up release for.
Admin
No. It depends what they mean by failing. If "failing" means "This test is valid, but fails.", then it's negligent to remove it. If it means "this test fails, but it's just because we should never have created that test to begin with, since it makes no sense now that we've tried it on a real system and thought the whole scenario through more", then no, it's right to remove it. There's simply not enough information here to rush to judgements.
Admin
Stop introducing common sense! It has NO place in these comments.
Admin
You know what they say: "You either have to be part of the solution, or you're going to be part of the problem."
A test is never part of the solution...
Admin
I don't know which is worse: That someone willingly put this as a work item, that devs actually completed it, or that a team smart enough to use a Kanban board doesn't know that this is a VERY BAD IDEA.
Admin
Don't most testing frameworks have a way to skip a test with a message? That's what should be done if a test is no longer relevant. The way the article is written it's almost clear that it's the "We can't ship software with failing unit tests, so remove the failing unit tests" mentality which is just wrong.
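Most do. As a minimal sketch, here is how it looks in Python's `unittest` (the class name, test names, and skip reason are invented for illustration): the skipped test stays visible in every run, along with the reason it was skipped.

```python
import unittest

class LegacyFeatureTests(unittest.TestCase):
    # Hypothetical: a test for functionality that moved elsewhere.
    @unittest.skip("functionality moved to the reporting layer")
    def test_old_code_path(self):
        self.fail("never runs; the skip message documents why")

    def test_current_behaviour(self):
        self.assertEqual(2 + 2, 4)

# Running the suite reports one pass and one skip -- the skip and its
# reason show up in every test run instead of silently vanishing.
runner = unittest.TextTestRunner(verbosity=2)
result = runner.run(
    unittest.defaultTestLoader.loadTestsFromTestCase(LegacyFeatureTests))
```

Unlike deletion, the skip survives in the report, so nobody has to dig through source control to learn the test ever existed.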
Admin
It's called source control. That's where you keep deprecated code.
Admin
Exactly. It might be the TEST that's failing, not the code. There's nothing magical about test code that makes it any less likely to have a bug than the code it's testing.
Admin
Or more sacred.
If you can prune redundant code, why shouldn't you prune redundant tests?
Not that we know whether the removed test was redundant.
Admin
I'm particularly fond of using a pair of end wire cutters to squeeze the magnets out of those magnet toys that come with the steel balls. A few hundred of those babies and you won't need post-it notes anymore, you can make the whole message out of magnet pixels!
Also, here is a web site to try if you are looking for a reason to gouge your eyeballs out: http://www.whimsie.com/metalcrafttools.html Also, Akismet sucks.
Admin
Sir, your comment is the real WTF. I lol'd
Admin
We don't need QA, just auto tests, and the coders just code to pass the tests.
Let the end users be the beta testers.
And now upper management will get a bonus for cutting costs from the layoffs of QA.
Admin
This comment failed so it was removed.
Admin
Seems excessive, why didn't you just shut off the engine?
Admin
Right. Because knowing about a problem isn't the very first step of the solution. And knowing that the solution does what it's supposed to isn't part of the solution.
You must be one of those idiots taught by idiot professors that testers are the enemy of dev.
Admin
It is possible that the test case is failing either because the test case is wrong, or because it's testing for functionality that was removed from the spec.
Unlikely, perhaps, but in a well-organised IT shop (they exist, right?) it's possible that tests were prepared for an original spec before the deliverables were cut back. If there are strict QA people (who don't necessarily understand what's going on) then failing tests concern them even if it's because the test is looking for something that never happened. Of course, an easier solution is to just change the expected result of the test....
Admin
Requirement 3: Allow multiplication functionality
Test 7: Make sure that multiplication functionality works
Somewhere during the budget blowouts: Cut Requirement 3, they can just do a series of additions, they don't actually need to multiply.
Question for Herr Westwood: Would you change the test to make sure that multiple additions get the required result, or dump the test because it is testing for functionality that is no longer being implemented?
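Taking the first option, one way to keep Test 7 alive is to point it at the replacement behaviour. A hypothetical sketch (the function name and implementation are invented; only the requirement/test numbers come from the scenario above):

```python
def multiply_by_addition(a, b):
    """Replacement for the cut multiplication feature (Requirement 3):
    a series of additions instead of real multiplication."""
    total = 0
    for _ in range(abs(b)):
        total += a
    return total if b >= 0 else -total

# The updated Test 7: same expected results as the old multiplication
# test, now exercised against the repeated-addition replacement.
def test_7_repeated_addition_matches_multiplication():
    for a in (-3, 0, 2, 7):
        for b in (-2, 0, 1, 5):
            assert multiply_by_addition(a, b) == a * b

test_7_repeated_addition_matches_multiplication()
```

The test keeps verifying the requirement's intent even though the implementation underneath it was cut back.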
Admin
True story (and I'm sure others here have done the same).
We have an in-house monitoring tool for one of our applications. Among other things, it checks the availability of multiple servers (by sending a simple request and recording the time the request took to complete). This occurs every few minutes and the result is displayed on a GUI so that non-technical people (NTP) can feel some comfort that the world is a happy place. Some years ago, a change was implemented that increased this ping value, and suddenly in the GUI it was above some threshold, which meant it was displayed in red. The red really concerned the NTP, who were convinced the world was ending, because "there's a lot of red on our monitor". After some discussion with various technical stakeholders (which basically revolved around "it is what it is"), the decision was made to raise the threshold so that the monitor would show less red. Suddenly the world is a happy place again, with rainbows and chirpy birds.
Short story: only use red when there's a serious problem (and be sparing with oranges and yellows, which might have NTP thinking they're on the verge of a problem), otherwise people panic. Green is a much more friendly colour.
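The "be sparing with red" rule is easy to encode. A toy sketch (function name and threshold values are invented for illustration):

```python
def status_colour(ping_ms, warn_at=500, alert_at=2000):
    """Map a ping time (ms) to a GUI colour. Red is reserved for
    genuinely serious latency so it keeps its meaning."""
    if ping_ms >= alert_at:
        return "red"      # serious problem only
    if ping_ms >= warn_at:
        return "yellow"   # degraded, but the world is not ending
    return "green"        # happy place, rainbows, chirpy birds
```

The trick, of course, is picking thresholds that reflect actual service health rather than whatever makes the monitor look calm.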
Admin
Mark it as ignore, you fool, so that when the customer changes his mind again you don't have to go and redesign the bloody test. Fucking hell, born yesterday were you?
Admin
This is how I ran GW-BASIC / BASICA programs back in the day:
Repeat until you have something working...
Admin
If you decide to remove a feature, remove the damned feature, and that includes the tests. Unused code is subject to bitrot, and is liable to cause bugs down the road.
Admin
The test still exists in Version Control if we ever need it again. I say remove.
Admin
I worked in a team where people thought that if a test fails, the fault lies with the test writer.
That project failed after a few months.