• Zatapatique (unregistered)
    Assert.True(new Post().IsFrist());
  • TheCPUWizard (unregistered)
    1. That is 0% code coverage [aka the tool was set up wrong!]
    2. That type of test is one valuable type in an arsenal of test types. "Design Rule Checks" [DRC] are critical to keeping things in line. For example, a "get-only property" design should FAIL if someone adds a setter. [note I am talking about the type of test, not the specific test example]
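    A sketch of such a design-rule check (in Python for brevity; the class and property names are invented): the test fails the moment someone adds a setter to a property that is meant to be get-only.

```python
class Post:
    def __init__(self):
        self._id = 42

    @property
    def id(self):  # read-only by design: no setter defined
        return self._id


def test_id_property_is_get_only():
    prop = Post.__dict__["id"]
    assert isinstance(prop, property)
    # fset is None as long as nobody defines a setter;
    # this test starts failing the day someone adds one
    assert prop.fset is None


test_id_property_is_get_only()
```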
  • Anonymous (unregistered)

    HasDefaultConstructor would again, be caught if it were used

    ...unless it is used via Reflection/Activator.CreateInstance()
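    A quick Python illustration of the reflection case (a `globals()` lookup standing in for C#'s `Activator.CreateInstance`; the class name is invented): the constructor runs even though no call site names it directly, so "search for usages" finds nothing.

```python
class Widget:
    def __init__(self):  # looks unused to a naive "who calls this?" search
        self.ready = True


# lookup by string, like Activator.CreateInstance(typeof(Widget))
cls = globals()["Widget"]
obj = cls()  # constructor invoked without any direct call site

assert obj.ready
```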

  • (nodebb)

    Did they have any tests which ensured that comments don't get executed and that whitespace separates keywords? No? See, it's not 100%!!

  • Sauron (unregistered)

    I hate to say this, but that codebase is still better covered than most parts of the codebase of the company where I work :D

  • Oneway (unregistered)

    A testing framework or library that would mark an entire class as covered based on the given test examples should be taken out back and shot. Twice.

  • (nodebb)

    Ah yes, I'm familiar with this pattern. 100% coverage is good; hell, sometimes testing "basic" things isn't awful. But you have to know that testing is meant to test functionality, not behavior.

  • Tim (unregistered)

    I don't understand how you would go about configuring a testing framework to show 100% test coverage without any tests which actually test the code, even if you wanted to.

  • (nodebb)

    This ... might be the scariest manager story I have ever seen on this site.

    If this manager believes this class of test is actually helping keep bugs at bay, and (shudder) teaching that to any green developers who find their way into his group, we are looking at a Typhoid Mary level of danger, spreading out to other shops as people leave this one.

  • (nodebb) in reply to Oneway

    A testing framework or library that would mark an entire class as covered based on the given test examples should be taken out back and shot. Twice.

    With that handy GAU-8 you keep in your back pocket.

  • Dave Aronson (github)

    And this is exactly why mutation testing is so valuable. https://www.youtube.com/playlist?list=PLMrm16n64Bub8urB-bsyMyHiNPMLG7FAS is my playlist of versions of my presentation on it, in lengths from about 22 to 69 minutes, depending on how in-depth you want to go.
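    For the curious, a hand-rolled toy version of the idea (in Python; real tools like mutmut, PIT, or Stryker automate all of this): mutate an operator in the code under test, re-run the tests, and check that the mutant gets "killed". A suite that only executes lines without asserting anything lets every mutant survive.

```python
import ast

SRC = "def add(a, b):\n    return a + b\n"


def run_tests(add):
    """A tiny test suite; returns True if all assertions pass."""
    try:
        assert add(2, 3) == 5
        assert add(-1, 1) == 0
        return True
    except AssertionError:
        return False


class SwapAddSub(ast.NodeTransformer):
    """The mutation: turn every '+' into '-'."""
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Add):
            node.op = ast.Sub()
        return node


def load(tree):
    ns = {}
    exec(compile(tree, "<mutant>", "exec"), ns)
    return ns["add"]


original = load(ast.parse(SRC))
mutant = load(ast.fix_missing_locations(SwapAddSub().visit(ast.parse(SRC))))

assert run_tests(original)      # the real code passes the suite
assert not run_tests(mutant)    # a good suite kills the mutant
```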

  • Ex-Java dev (unregistered)

    I had a feeling something was up when I saw "100% test coverage". If they told me 80%, or 93%, it would sound plausible, but to get 100% you are either writing weird tests which touch the code but actually verify nothing, as described, or everything is a slog.

    A type of testing I'd like to see more of is behavior testing, where you test the overall inputs and outputs of the program, running it through scenarios at a high if not top level. This makes the tests more resilient to refactoring, since you aren't testing every single class, which is what usually happens with unit testing.

    Never got to that golden land myself, but a man can dream.

  • (nodebb)

    When I read test coverage I always have to cringe.

    Let's take this code as an easy example:

    public int Add(int a, int b) => a + b;
    

    Now if we write a unit test and feed it 10 different values, we end up with 100% test coverage; great, right? Bug-free code?

    Actually, no. Not at all, in fact. We tested 10 combinations out of 2^65 possible ones. Spitting in an ocean has more effect than this test; in fact, spitting into the black hole at the center of the galaxy would have more effect.
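    To make the point concrete, here's a sketch in Python (using `ctypes` to imitate a fixed 32-bit int, since Python's own ints don't overflow): the tests below execute every line of the function, so coverage is 100%, yet the wraparound case sits completely untested.

```python
import ctypes


def add32(a, b):
    # models a language with fixed 32-bit ints: the result wraps silently
    return ctypes.c_int32(a + b).value


# a "100% line coverage" suite: every line executed, all green
for a, b in [(1, 2), (0, 0), (-5, 5), (10, 20), (100, 200)]:
    assert add32(a, b) == a + b

# ...and yet, just outside the sampled inputs, the behavior is a surprise:
assert add32(2**31 - 1, 1) == -(2**31)  # INT_MAX + 1 wraps to INT_MIN
```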

    So are (unit) tests useless? Yes, obviously, at least in the way a lot of people think of them (i.e. the people that care about test coverage).

    There are three use cases where tests actually make sense:

    1. Preventing bugs from happening again (so basically bug coverage).

    2. Implementing something highly complex, especially when certain non-obvious acceptance criteria are required.

    3. Forcing developers who start coding without having a clue what to do to actually think about where they want to go first (that's called test-first, part of TDD).

    (1) is pretty much a no-brainer and by far the most valuable. A bug fix without a test that makes sure the bug never happens again is, in my opinion, not a bug fix, just a delay tactic. I think everyone has already been on projects where the same bug happens over and over again.

    On the other hand, (2) is pretty situational, and it is more a guard rail for the developer (and future developers modifying the code) than something that will result in better code. It's a tool to avoid falling when on uncertain ground.

    The last one (3) can be used to patch a bigger issue (the guy who writes code for a week and then announces it didn't work out). If there's a dev on the team who doesn't know the road when they start coding, then forcing test-first will only patch the issue, not fix it. But sometimes there's no other way to deal with the situation, so it's better than nothing.

  • (nodebb)

    Thought longs, wrote ints. Oops. For 32-bit values it would be 2^33. But I think longs have a nicer ring to it, because that's when the black hole works out nicely.

  • kythyria (unregistered)

    I wonder if this is someone coding with a dyntyped accent, where you can't check these things so easily (my suspicion is that a lot of unit testing's popularity comes from dyntyped langs; the fancier your type system, the fewer tests you need).

  • Officer Johnny Holzkopf (unregistered) in reply to MaxiTB

    Well, we here at INITECH understand test coverage as a means to measure software quality. We have 100% test coverage. We believe in tested classes, as you can see. Our software is bug-free and excellent, as proven by the test. Still, there is something strange... we have this class here that does addition, like in plus sign... and if I enter "5" and "7" I should get "12", but I get "-2". The "1" is replaced by a "-". Why is that? I mean, we have 100% test coverage and green lights in the dashboard, so our code can't be the problem. I will show you... let me check... here it says "sum = a - b;" but that can't be it, right? Because we have 100% test coverage. Now fix my PC, it must have a broken processor!

  • (nodebb)

    Good operational unit testing tests the corner cases. Which presupposes your devs can detect what the corner cases are for the biz rules their class / method implements. E.g. for @MaxiTB's simple add-two-ints method, does the dev know which addends trigger over/underflow? If so, write tests to trigger them both. If not, they need more training before writing any test, and therefore any code, since the tests must come first.
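    A sketch of what those corner-case tests might look like (Python shown; the hypothetical `checked_add` stands in for C#'s `checked` arithmetic or Java's `Math.addExact`, which raise on overflow instead of wrapping):

```python
INT_MIN, INT_MAX = -2**31, 2**31 - 1


def checked_add(a, b):
    # like checked arithmetic: raise instead of silently wrapping
    r = a + b
    if r < INT_MIN or r > INT_MAX:
        raise OverflowError("32-bit add out of range")
    return r


def raises_overflow(a, b):
    try:
        checked_add(a, b)
        return False
    except OverflowError:
        return True


# the corner-case tests: deliberately trigger overflow and underflow,
# and confirm the boundary values themselves are fine
assert raises_overflow(INT_MAX, 1)
assert raises_overflow(INT_MIN, -1)
assert checked_add(INT_MAX, 0) == INT_MAX
assert checked_add(INT_MIN, 0) == INT_MIN
```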

    My personal challenge with TDD is libraries and APIs of whatever provenance that are not part of my own project. Any library/API is a black box which can fail or be buggy in mysterious ways. Unless you have a test which mocks all that unknown / unknowable buggy behavior, you don't know for certain that any of your methods which interface with that library will handle those bugs correctly. So any assertion of an X% level of test coverage is simply BS.

  • (nodebb)

    Seen this before in USG contract requirements: 100% test coverage of all code paths, the Zeno's paradox of coverage, both because you need to test every possible error path, many of which are almost impossible to trigger normally, and because the new code added for testing also needs test coverage. The vendor's response was to spend six months removing almost all error checking and special-case handling, and then they passed the 100% coverage requirement.

    So the code here may be WTF-ey, but the requirements were an even bigger WTF.

    Addendum 2024-10-22 01:38: In fact if what the manager had actually said was "unit testing is very important to the people who issued the lucrative contract to us, and we have 100% test coverage" then this approach would make perfect sense.

  • (nodebb) in reply to WTFGuy

    devs can detect what corner cases

    I challenge that idea with the main reason why we write tests in the first place: it's a safeguard against mistakes we make, not against purposeful bad decisions. In other words, most devs I know don't create bugs on purpose; they are not aware of them happening. So it doesn't matter which edge cases you are aware of in general, because again, you could make an unintentional bad bet there as well.

    Keep in mind, I picked one of the simplest operations on purpose. Ignoring subtraction (which is only adding a negative number), you have multiplication next, with a way bigger under-/overflow range. Then division, which adds a dead area right at zero. More complex operations add additional complexity: with negative exponents the int range doesn't even work anymore, and I'm not even going into floating-point numbers, with NaN and infinite values both positive and negative, plus a negative zero. To make matters worse, not only are there multiple floating-point standards, some languages don't even follow them, and you need to opt in with a strict mode or your behavior might be not only unexpected but also dependent on the platform where you are running the language.

    But all that doesn't matter, because real test scenarios don't only feature simple functional algorithms; you have state, parallel processing, timings, cache behavior etc. to deal with too. So for a real-world test you usually have dozens, if not millions, of edge cases, depending on what you are testing. It's for sure more than 1 test per execution branch.
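    A few of those floating-point surprises are easy to demonstrate in any IEEE 754 language (Python shown here):

```python
import math

nan = float("nan")
assert nan != nan                       # NaN compares unequal even to itself
assert math.isnan(nan)

inf = float("inf")
assert inf + 1 == inf                   # infinities absorb finite values
assert math.isinf(-inf)

assert 0.0 == -0.0                      # negative zero compares equal...
assert math.copysign(1, -0.0) == -1.0   # ...but the sign bit is really there

assert 0.1 + 0.2 != 0.3                 # the classic binary rounding surprise
```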

    Unit tests are pretty clear: you test the unit, not external dependencies like libraries. That's what integration tests are for. And with those, you have to trust the library to do the correct thing; otherwise you should switch to a version that works correctly, or replace it. But you are 100% right: external dependencies are always a potential source of bugs that may not be covered by code coverage either.

  • TheCPUWizard (unregistered) in reply to DocMonster

    @Doc - STRONGLY disagree. One type of testing is related to functionality, another is related to performance [time, resource consumption], another is related to design checking, and there are at least 4 more categories...

  • TheCPUWizard (unregistered)

    @WTFGuy - "My personal challenge with TDD is libraries and API's of whatever provenance not part of my own project. Any library /API is a black box which can fail or be buggy in mysterious ways. "...

    Yup.. working on robust testing for a library... hitting the real systems has the typical problems, so I use DI to drop in alternate simulations (I avoid using "Mocks" as a term due to some connotations)... Those have become complex [over 10K lines in total], so now I have a separate set of tests to validate that the testing environment has not deviated from the known characteristics of the real external environments...
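    A minimal sketch of that DI pattern (in Python; all names are invented for illustration): the real dependency and its simulation share an interface, and the code under test takes whichever it is given, so the tests never touch the real system.

```python
import time


class RealClock:
    """The real external dependency: wall-clock time."""
    def now(self):
        return time.time()


class FakeClock:
    """The simulation injected in tests: fully deterministic."""
    def __init__(self, t):
        self.t = t

    def now(self):
        return self.t


class SessionChecker:
    def __init__(self, clock):
        self.clock = clock  # dependency injected, not hard-coded

    def expired(self, started_at, ttl):
        return self.clock.now() - started_at > ttl


# in production: SessionChecker(RealClock())
# in tests, time is pinned, so the assertions can never flake:
checker = SessionChecker(FakeClock(t=1000.0))
assert checker.expired(started_at=0.0, ttl=500)
assert not checker.expired(started_at=900.0, ttl=500)
```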

  • (nodebb)

    Time to validate the audit reports of the audit of the auditing process. Again. Sigh.

  • (nodebb) in reply to WTFGuy

    You really need to start writing a script for those ;-)

Leave a comment on “Perfect Test Coverage”
