• Delenn (unregistered)

    Frist remove failing comment. Retry.

  • Le Forgeron (unregistered)

    When it's too hot, just get rid of the thermometer.

    That's the way to improve quality too: no failed metric when there is no metric.

  • GD (unregistered)

    This clearly shows a lack of long-term vision. Removing all tests should provide a more stable solution.

  • The Mole (unregistered)

    The failing test could of course just be garbage and/or redundant and so it could be valid just to remove it. Whether it is a WTF depends on what discussions happened in the planning game.

  • TopTension (unregistered)

    Without more detail this may not be a WTF.

    Maybe the test was faulty itself or it was irrelevant, because it tested under conditions that are otherwise excluded. We don't know.

  • Matt Westwood (cs)

    I cured the worrying knock and rattle in the engine of my car by the simple expedient of turning up the volume of the radio.

  • Matt Westwood (cs) in reply to The Mole
    The Mole:
    The failing test could of course just be garbage and/or redundant and so it could be valid just to remove it. Whether it is a WTF depends on what discussions happened in the planning game.

    There is absolutely no justification for this. What you've got to do is fix the test so it is relevant. If it's redundant then it won't fail. If it's garbage then fix it so it's not.

  • Zefram Cochrane (unregistered)

    Ignore it! We'll be fine.

  • Michael (unregistered) in reply to Matt Westwood

    This reduces the chance of accident because now you are more relaxed when you drive. :)

  • lucidfox (cs)

    TRWTF is using post-it notes in 2014.

  • Dude (unregistered) in reply to lucidfox

    Dude, where's my jetpack? This future is totally bogus, dude.

  • minusSeven (unregistered)

    Nothing wrong with this. In order to fix failing unit tests you first need to go into the history and determine why the unit test exists in the first place. Then you need to figure out whether you still need it today (maybe the functionality for which the unit test exists is obsolete now).

    It is both lazy and smart to delete the failing tests and later add tests based on the current functionality, rather than trying to figure out why the unit test existed and since when it was failing. It works in legacy systems where no one knows the entire functionality or has the time to do detective work.

  • Smug Unix User (unregistered)

    Why remove when you can <Ignore()>?

  • jaffa creole (unregistered)

    Meh, I remove busted unit tests all the time. My team used to be really bad at writing tests, so a lot of them are crap.

    TRWTF is the process nazi who took the time to write a post it note for that and walk it across a whiteboard.

  • DrPepper (cs)

    Where I'm at, we had a bunch of unit tests that were working at one point; then we stopped running them. Now we're back to running them (and as part of our CI build, too!). However, in the interim, many of the tests stopped working. In the interest of getting the CI build to run the tests, we fixed what we could; the rest we marked as ignore. Now the tests all pass, the CI build is good, and we can come back and fix the broken tests.

    It's not a crime to remove a broken test -- maybe the business requirements changed, and now the test is no longer relevant; or a logic change moved some functionality somewhere else. I've seen it happen many times, where the code is right but the test for it is broken.

  • nobody (unregistered) in reply to Matt Westwood
    Matt Westwood:
    The Mole:
    The failing test could of course just be garbage and/or redundant and so it could be valid just to remove it. Whether it is a WTF depends on what discussions happened in the planning game.

    There is absolutely no justification for this. What you've got to do is fix the test so it is relevant. If it's redundant then it won't fail. If it's garbage then fix it so it's not.

    Right. Because removed features should still be tested. And garbage tests (like testing that 1+1=3) are always possible to make useful without making redundant, even if you have a full, complete set of tests for the same feature. And redundant tests are always updated at the same time as their duplicates avoiding all failures, and it's good to have redundant tests because this improves maintainability of the test environment.

  • Jakob (unregistered) in reply to Matt Westwood
    Comment held for moderation.
  • Kevin (unregistered)

    My check-engine light came on the other day so I removed the light. I don't see the issue here.

  • Meep (unregistered)

    Yes, it's wrong to simply remove the failing test. The point of unit tests is to identify problems when refactoring. Thus you should be updating the unit tests as you refactor.

  • Inspired (unregistered)

    Everyone, it may not be a real WTF, but you're missing the point. This post is simply a metaphor. It's a new year and we should all strive to remove the "failing tests" from our lives.

  • TC (unregistered)

    Reminds me of the Colbert Report segment on China lowering its health standards to reduce the number of smog alerts.

  • Jeff Grigg (unregistered)

    :'(

  • NamingException (cs) in reply to lucidfox
    lucidfox:
    TRWTF is using post-it notes in 2014.
    +1
  • accident (unregistered)

    Without the context it could be meaningless. Perhaps the unit test is testing functionality that no longer exists or was moved to another layer. Or maybe it is such an insignificant minor issue that it isn't worth holding up release for.

  • foo AKA fooo (unregistered) in reply to Jakob
    Comment held for moderation.
  • Guest (unregistered) in reply to Matt Westwood

    No. It depends what they mean by failing. If "failing" means "This test is valid, but fails.", then it's negligent to remove it. If it means "this test fails, but it's just because we should never have created that test to begin with, since it makes no sense now that we've tried it on a real system and thought the whole scenario through more", then no, it's right to remove it. There's simply not enough information here to rush to judgements.

  • Egbert (unregistered) in reply to The Mole
    The Mole:
    The failing test could of course just be garbage and/or redundant and so it could be valid just to remove it. Whether it is a WTF depends on what discussions happened in the planning game.

    Stop introducing common sense! It has NO place in these comments.

  • ANON (unregistered)

    You know how they say: "You either have to be part of the solution, or you're going to be part of the problem."

    A test is never part of the solution...

  • ObiWayneKenobi (cs)

    I don't know which is worse: That someone willingly put this as a work item, that devs actually completed it, or that a team smart enough to use a Kanban board doesn't know that this is a VERY BAD IDEA.

  • ObiWayneKenobi (cs) in reply to Guest
    Guest:
    No. It depends what they mean by failing. If "failing" means "This test is valid, but fails.", then it's negligent to remove it. If it means "this test fails, but it's just because we should never have created that test to begin with, since it makes no sense now that we've tried it on a real system and thought the whole scenario through more", then no, it's right to remove it. There's simply not enough information here to rush to judgements.

    Don't most testing frameworks have a way to skip a test with a message? That's what should be done if a test is no longer relevant. The way the article is written it's almost clear that it's the "We can't ship software with failing unit tests, so remove the failing unit tests" mentality which is just wrong.
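
    They do. As a minimal sketch using Python's stdlib unittest (the class, test names, and skip reason here are hypothetical, just to show the mechanism):

```python
# Minimal sketch: skip a no-longer-relevant test with a message instead of
# deleting it. The names and reason string are made up for illustration.
import unittest

class CalculatorTests(unittest.TestCase):
    @unittest.skip("Multiplication was cut from the spec; keeping the test "
                   "around in case the feature comes back.")
    def test_multiplication(self):
        self.assertEqual(6 * 7, 42)

    def test_addition(self):
        self.assertEqual(1 + 1, 2)
```

    The skip reason shows up in the test report, so the "why" survives in the build output rather than only in source control history.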

  • Valued Service (unregistered) in reply to ObiWayneKenobi
    ObiWayneKenobi:
    Don't most testing frameworks have a way to skip a test with a message? That's what should be done if a test is no longer relevant.
    Testing for long last name failure removed. No longer using system that has last name restriction. Testing for short date failure removed. No longer using legacy system that can only support 2 digit years. Testing for emailing to ex-VP removed. He no longer works here. Testing for... removed... Testing for... removed... Testing for... removed... (20 times over).

    It's called source control. That's where you keep deprecated code.

  • herby (cs) in reply to Kevin
    Kevin:
    My check-engine light came on the other day so I removed the light. I don't see the issue here.
    Just remember: Penny's "check engine" light is lit!
  • Loren Pechtel (cs) in reply to TopTension
    TopTension:
    Without more detail this may not be a WTF.

    Maybe the test was faulty itself or it was irrelevant, because it tested under conditions that are otherwise excluded. We don't know.

    Exactly. It might be the TEST that's failing, not the code. There's nothing magical about test code that makes it any less likely to have a bug than the code it's testing.

  • TopTension (unregistered)

    Or more sacred.

    If you can prune redundant code, why shouldn't you prune redundant tests?

    Not that we know if the removed test was redundant.

  • ¯\(°_o)/¯ I DUNNO LOL (unregistered) in reply to lucidfox
    Comment held for moderation.
  • Wrexham (unregistered) in reply to ANON
    ANON:
    "You either have to be part of the solution, or you're going to be part of the problem."
    Nah. You're either part of the solution, or you're part of the precipitate.
  • asdfg (unregistered) in reply to Valued Service
    Valued Service:
    Testing for long last name failure removed. No longer using system that has last name restriction. Testing for short date failure removed. No longer using legacy system that can only support 2 digit years.
    Since a large part of what you are doing with unit tests is to catch regressions neither of these are valid IMO. Sure they pass now, but when someone makes a change 2 years from now that breaks long last names you will be happy you still have the unit test.
  • AngryMadCat (unregistered) in reply to Inspired

    Sir, your comment is the real WTF. I lol'd

  • Joe (unregistered)

    We don't need QA, just automated tests, and the coders just code to pass the tests.

    Let the end users be the beta testers.

    And now upper management will get bonuses for cutting costs by laying off QA.

  • RFoxmich (unregistered)

    This comment failed so it was removed.

  • Mike L (unregistered) in reply to Matt Westwood
    Matt Westwood:
    I cured the worrying knock and rattle in the engine of my car by the simple expedient of turning up the volume of the radio.

    Seems excessive, why didn't you just shut off the engine?

  • nobody (unregistered) in reply to ANON
    ANON:
    You know how they say: "You either have to be part of the solution, or you're going to be part of the problem."

    A test is never part of the solution...

    Right. Because knowing about a problem isn't the very first step of the solution. And knowing that the solution does what it's supposed to isn't part of the solution.

    You must be one of those idiots taught by idiot professors that testers are the enemy of dev.

  • Devil's Advocate (unregistered)

    It is possible that the test case is failing either because the test case is wrong, or because it's testing for functionality that was removed from the spec.

    Unlikely, perhaps, but in a well-organised IT shop (they exist, right?) it's possible that tests were prepared for an original spec before the deliverables were cut back. If there are strict QA people (who don't necessarily understand what's going on) then failing tests concern them even if it's because the test is looking for something that never happened. Of course, an easier solution is to just change the expected result of the test....

  • Devil's Advocate (unregistered) in reply to Matt Westwood
    Matt Westwood:
    The Mole:
    The failing test could of course just be garbage and/or redundant and so it could be valid just to remove it. Whether it is a WTF depends on what discussions happened in the planning game.

    There is absolutely no justification for this. What you've got to do is fix the test so it is relevant. If it's redundant then it won't fail. If it's garbage then fix it so it's not.

    Project: Reverse Polish calculator (implemented by simpletons, in this instance)

    Requirement 3: Allow multiplication functionality
    Test 7: Make sure that multiplication functionality works

    Somewhere during the budget blowouts: Cut Requirement 3, they can just do a series of additions, they don't actually need to multiply.

    Question for Herr Westwood: Would you change the test to make sure that multiple additions get the required result, or dump the test because it is testing for functionality that is no longer being implemented?
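
    To make that scenario concrete, here's a toy sketch in Python (all names hypothetical): multiplication is cut, and the rewritten Test 7 checks that repeated addition gets the required result.

```python
# Toy sketch of the scenario above; function and test names are hypothetical.
def repeated_add(a, times):
    """'Multiply' a by a non-negative integer count using only addition."""
    total = 0
    for _ in range(times):
        total += a
    return total

# The rewritten "Test 7": multiple additions give the required result.
assert repeated_add(6, 7) == 42   # stands in for 6 * 7
assert repeated_add(5, 0) == 0    # zero repetitions add nothing
```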

  • Mickey (unregistered)

    True story (and I'm sure others here have done the same).

    We have an in-house monitoring tool for one of our applications. Among other things, it checks the availability of multiple servers (by sending a simple request and recording the time the request took to complete). This occurs every few minutes and the result is displayed on a GUI so that non-technical people (NTP) can feel some comfort that the world is a happy place. Some years ago, a change was implemented that increased this ping value, and suddenly in the GUI it was above some threshold, which meant it was displayed in red. The red really concerned the NTP, who were convinced the world was ending, because "there's a lot of red on our monitor". After some discussion with various technical stakeholders (which basically revolved around "it is what it is"), the decision was made to raise the threshold so that the monitor would show less red. Suddenly the world is a happy place again, with rainbows and chirping birds.

    Short story: only use red when there's a serious problem (and be sparing with oranges and yellows, which might have NTP thinking they're on the verge of a problem), otherwise people panic. Green is a much more friendly colour.

  • Matt Westwood (cs) in reply to Devil's Advocate
    Devil's Advocate:
    Matt Westwood:
    The Mole:
    The failing test could of course just be garbage and/or redundant and so it could be valid just to remove it. Whether it is a WTF depends on what discussions happened in the planning game.

    There is absolutely no justification for this. What you've got to do is fix the test so it is relevant. If it's redundant then it won't fail. If it's garbage then fix it so it's not.

    Project: Reverse Polish calculator (implemented by simpletons, in this instance)

    Requirement 3: Allow multiplication functionality
    Test 7: Make sure that multiplication functionality works

    Somewhere during the budget blowouts: Cut Requirement 3, they can just do a series of additions, they don't actually need to multiply.

    Question for Herr Westwood: Would you change the test to make sure that multiple additions get the required result, or dump the test because it is testing for functionality that is no longer being implemented?

    Mark it as ignore, you fool, so that when the customer changes his mind again you don't have to go and redesign the bloody test. Fucking hell, born yesterday were you?

  • jimshatt (cs)

    This is how I ran GW-BASIC / BASICA programs back in the day:

    RUN
    ERROR ON LINE 30
    30
    RUN
    ERROR ON LINE 80
    80
    RUN

    Repeat until you have something working...

  • Meep (unregistered) in reply to Devil's Advocate
    Devil's Advocate:
    Matt Westwood:
    The Mole:
    The failing test could of course just be garbage and/or redundant and so it could be valid just to remove it. Whether it is a WTF depends on what discussions happened in the planning game.

    There is absolutely no justification for this. What you've got to do is fix the test so it is relevant. If it's redundant then it won't fail. If it's garbage then fix it so it's not.

    Project: Reverse Polish calculator (implemented by simpletons, in this instance)

    Requirement 3: Allow multiplication functionality
    Test 7: Make sure that multiplication functionality works

    Somewhere during the budget blowouts: Cut Requirement 3, they can just do a series of additions, they don't actually need to multiply.

    Question for Herr Westwood: Would you change the test to make sure that multiple additions get the required result, or dump the test because it is testing for functionality that is no longer being implemented?

    If you decide to remove a feature, remove the damned feature, and that includes the tests. Unused code is subject to bitrot, and is liable to cause bugs down the road.

  • Devil's Advocate (unregistered) in reply to Matt Westwood
    Matt Westwood:
    Devil's Advocate:
    Matt Westwood:
    The Mole:
    The failing test could of course just be garbage and/or redundant and so it could be valid just to remove it. Whether it is a WTF depends on what discussions happened in the planning game.

    There is absolutely no justification for this. What you've got to do is fix the test so it is relevant. If it's redundant then it won't fail. If it's garbage then fix it so it's not.

    Project: Reverse Polish calculator (implemented by simpletons, in this instance)

    Requirement 3: Allow multiplication functionality
    Test 7: Make sure that multiplication functionality works

    Somewhere during the budget blowouts: Cut Requirement 3, they can just do a series of additions, they don't actually need to multiply.

    Question for Herr Westwood: Would you change the test to make sure that multiple additions get the required result, or dump the test because it is testing for functionality that is no longer being implemented?

    Mark it as ignore, you fool, so that when the customer changes his mind again you don't have to go and redesign the bloody test. Fucking hell, born yesterday were you?

    While I suspect you may be trolling, I'd rather assume you're a fucking idiot.

    The test still exists in Version Control if we ever need it again. I say remove.

  • Lorenzo (unregistered)

    I worked in a team where people thought that if a test fails, the fault lies with the test writer.

    That project failed after a few months.

Leave a comment on “Productive Testing”
