• QJo (unregistered) in reply to chubertdev
    chubertdev:
    "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." --Brian Kernighan

    The same is probably true of writing unit tests for your code.

    I would dispute Kernighan's statement. Badly-written code (which wasn't hard to write) is often horribly hard to debug, while well-written, well-designed code (which is considerably harder to write) may well (in my experience, at least) be a lot easier to debug.

  • Jon (unregistered) in reply to chubertdev
    chubertdev:
    "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." --Brian Kernighan

    The same is probably true of writing unit tests for your code.

    By "writing unit tests for your code", you do it the wrong way. You should write code for your unit tests.

    -Jon, who admits he does it the wrong way sometimes.

  • Brendan (unregistered) in reply to Jon

    Hi,

    Jon:
    By "writing unit tests for your code", you do it the wrong way. You should write code for your unit tests.

    That's an awesome idea! That way, instead of returning fake results to real code, you could return real results to fake code - it would save a lot of development time.

    • Brendan
  • (cs) in reply to QJo
    QJo:
    chubertdev:
    "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." --Brian Kernighan

    The same is probably true of writing unit tests for your code.

    I would dispute Kernighan's statement. Badly-written code (which wasn't hard to write) is often horribly hard to debug, while well-written, well-designed code (which is considerably harder to write) may well (in my experience, at least) be a lot easier to debug.

    So in essence you are disputing Kernighan's assertion that the multiplier ("twice as hard") is independent of the nature of the source code. I would tend to agree (up to a point), but I'd also assert that:

    • The multiplier is always greater than one. (That is, debugging code is always harder than writing it.)
    • Hard-to-write code ("clever" in his case, "well-written, well designed" in yours) will require debugging only in the more difficult edge cases, where the multiplier is always larger...
  • gnasher729 (unregistered) in reply to QJo
    QJo:
    I would dispute Kernighan's statement. Badly-written code (which wasn't hard to write) is often horribly hard to debug, while well-written, well-designed code (which is considerably harder to write) may well (in my experience, at least) be a lot easier to debug.
    There are different levels. It is easier to write 10 lines of bad code than 10 lines of good code. The problem is that 10 lines of bad code usually don't work, so you add another 10 lines, and another 10, and so on, until you can no longer easily see where the code will go wrong. So you end up with lots of code that still doesn't work quite right and is hard to debug. Debugging usually means comparing what the code does with what you think it is supposed to do. With a mess of code where you don't actually have an idea what each line is supposed to do, that's hard.

    Kernighan probably assumed you are at the level where you can write good code. At that point, starting with good, working code that is easy to debug, you can make it more complicated while still correct in order to make it faster, or shorter, or (worst case) to make yourself look clever. And that is often counterproductive.

  • QJo (unregistered) in reply to gnasher729
    gnasher729:
    QJo:
    I would dispute Kernighan's statement. Badly-written code (which wasn't hard to write) is often horribly hard to debug, while well-written, well-designed code (which is considerably harder to write) may well (in my experience, at least) be a lot easier to debug.
    There are different levels. It is easier to write 10 lines of bad code than 10 lines of good code. The problem is that 10 lines of bad code usually don't work, so you add another 10 lines, and another 10, and so on, until you can no longer easily see where the code will go wrong. So you end up with lots of code that still doesn't work quite right and is hard to debug. Debugging usually means comparing what the code does with what you think it is supposed to do. With a mess of code where you don't actually have an idea what each line is supposed to do, that's hard.

    Kernighan probably assumed you are at the level where you can write good code. At that point, starting with good, working code that is easy to debug, you can make it more complicated while still correct in order to make it faster, or shorter, or (worst case) to make yourself look clever. And that is often counterproductive.

    +1

    It's also worth pointing out that it being "harder to debug" may not so much mean it's "harder to accomplish" as that it "takes more time".

    So if it takes n hours to write a program, it being "twice as hard to debug" could equally well be interpreted as "takes 2n hours to debug".

    OTOH the multiplier is guesswork, little more than a number Kernighan plucked from thin air. So his sentence can be taken no more seriously than any other political soundbite (which is all it can ever be).

  • QJo (unregistered) in reply to Brendan
    Brendan:
    Hi,
    Jon:
    By "writing unit tests for your code", you do it the wrong way. You should write code for your unit tests.

    That's an awesome idea! That way, instead of returning fake results to real code, you could return real results to fake code - it would save a lot of development time.

    • Brendan

    One assumes that the code is written to the specification. In that case, the specification determines what the unit tests should check even before you cut a single line of implementation code. So Jon's suggestion is not as silly as it sounds to the programmatically naive.

  • zbxvc (unregistered) in reply to QJo
    QJo:
    gnasher729:
    QJo:
    I would dispute Kernighan's statement. Badly-written code (which wasn't hard to write) is often horribly hard to debug, while well-written, well-designed code (which is considerably harder to write) may well (in my experience, at least) be a lot easier to debug.
    [...]

    Kernighan probably assumed you are at the level where you can write good code. At that point, starting with good, working code that is easy to debug, you can make it more complicated while still correct in order to make it faster, or shorter, or (worst case) to make yourself look clever. And that is often counterproductive.

    +1

    It's also worth pointing out that it being "harder to debug" may not so much mean it's "harder to accomplish" as that it "takes more time".

    But that would be in contradiction with the second part of Kernighan's statement:

    chubertdev:
    "[...] you are, by definition, not smart enough to debug it." --Brian Kernighan

    Clearly it's not about the time it would take but about how you're not able to do it at all.

    My opinion is that the two skills are simply not related, at least not as directly as some would like to see. Some people, even though they can (somewhat) program, are not able to debug their own programs at all, while for others it's easier to find bugs in programs already written by someone else than to implement the same functionality themselves from scratch.

    QJo:
    [...] So his sentence can be taken no more seriously than any other political soundbite (which is all it can ever be).

    +1

  • anonymous (unregistered)

    I'm not really experienced in extensive automated testing (we're just adding it to our project), so I may have made a bad decision somewhere, but during my research phase on how to do good testing I saw little return for a lot of trouble when trying to unit test DAOs (and even business logic, to some extent). It takes a lot of effort to mock stuff, and it does not really test the correctness of the queries.

    Our approach? Unit test what is really a unitary piece of code (converters, tools, helpers and the like), and leave the rest to integration testing. First we used H2 as a database, but soon the SQL parser hit a wall, so we switched to a real test database, which is truncated and then put back into a known state through scripts and/or code on startup (sketched below). So our DAOs are carefully tested, and we continue up the chain using the previously tested layers (DAO-DB, Service-DAO-DB, Interface-Service-DAO-DB).

    Run time increased, but now we are sure that the queries will be accepted in production, and we saved lots of time by not mocking stuff. The downside is that unit tests are far less likely to catch a bug, which will only be discovered when running full integration testing (once or twice a day). It might be less extensive than unit testing everything, but so far it has proved to be a reasonable tradeoff between functionality/coverage and human effort.
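
    For illustration, a minimal sketch of the reset-to-known-state step, assuming JUnit 4 and plain JDBC; the URL, table name, and seed data here are hypothetical:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;
        import java.sql.Statement;
        import org.junit.Before;

        public class CustomerDaoIT {
            // Hypothetical test database; in practice this comes from config.
            private static final String TEST_DB_URL =
                    "jdbc:postgresql://localhost/testdb?user=test&password=test";

            @Before
            public void resetDatabase() throws SQLException {
                try (Connection c = DriverManager.getConnection(TEST_DB_URL);
                     Statement s = c.createStatement()) {
                    // Wipe and reseed so every test starts from a known state.
                    s.execute("TRUNCATE TABLE customers");
                    s.execute("INSERT INTO customers (id, name) VALUES (1, 'alice')");
                }
            }
        }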

  • faoileag (unregistered) in reply to mangobrain
    mangobrain:
    faoileag, your suggestions are continually missing the point, which is that the SQL statements are a core part of the DAO's functionality, and they may be incorrect. You can't test the DAO by doing a string compare on "expected SQL statement" and returning "expected results" and then claim the DAO works, because you haven't actually verified that the SQL statement on a real database would return those results - you've just hard-coded the assumption into your test harness!
    mangobrain, you are continuously missing the point of unit tests and confusing them with functionality tests.

    A unit test tests a method or a function in the sense that, given input A, the function/method should return B (or put the object in a well-defined state).

    As I have stated before, with unit tests on the DAOs you test their getData() method, as it is the only method they provide. You can add a unit test that covers the SQL statement in the DAO only in the sense that it compares the string against a given master string (sketched below).

    But that is all you can do with unit tests on a DAO.

    That the DAO delivers the correct data is part of the functional tests; you can (and should) automate those, but you normally don't run them on every submit.

    mangobrain:
    but my point is clearly not getting through
    Because I think it is wrong.
    mangobrain:
    despite being understood by at least two others here.
    That people agree with you does not necessarily mean you are right.
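
    For what it's worth, a minimal sketch of the master-string comparison described above, assuming JUnit 4 and a hypothetical CustomerDao that exposes its SQL as a constant:

        import static org.junit.Assert.assertEquals;

        import org.junit.Test;

        public class CustomerDaoSqlTest {
            @Test
            public void sqlMatchesMaster() {
                // Guards against accidental edits to the statement; it says
                // nothing about whether the SQL is correct on a real database.
                assertEquals("SELECT id, name FROM customers WHERE id = ?",
                             CustomerDao.SQL);
            }
        }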
  • faoileag (unregistered) in reply to chubertdev
    chubertdev:
    "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." --Brian Kernighan The same is probably true of writing unit tests for your code.
    Even more so. You develop your code based on your assumptions, and you also develop your unit tests based on those same assumptions. It is thus very easy to replicate an assumption-based error from your code in the unit tests, so the unit tests will not tell you that the code is wrong.

    Think leap years. A year is a leap year if it can be divided by 4 without remainder. Unless it can also be divided by 100 without remainder. But if it can also be divided by 400 without remainder, then all of a sudden it is a leap year again.

    If you don't know the third rule, your isLeapYear(2000) will return false. But since you assume that to be the correct result anyway, you might write a test like assert_false(isLeapYear(2000)).

    A peer review of your unit tests might be the road to take here, as it would at least catch the more obvious problems. But with complex algorithms, I don't know.
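
    To make this failure mode concrete, a sketch in Java (JUnit 4 asserts; the buggy isLeapYear is hypothetical):

        import static org.junit.Assert.assertFalse;
        import static org.junit.Assert.assertTrue;

        import org.junit.Test;

        public class LeapYearTest {
            // BUG: encodes only the first two rules; the 400-year rule is missing.
            static boolean isLeapYear(int year) {
                return year % 4 == 0 && year % 100 != 0;
            }

            @Test
            public void leapYears() {
                assertTrue(isLeapYear(1996));
                assertFalse(isLeapYear(1900));
                // The same wrong assumption, replicated in the test:
                // this assertion passes, yet 2000 WAS a leap year.
                assertFalse(isLeapYear(2000));
            }
        }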

  • Meep (unregistered) in reply to mangobrain
    mangobrain:
    faoileag:
    But he does not know (or does not care) that what you stub in such a case is Connection and PreparedStatement so that "ps.executeQuery();" delivers a suitable List depending on the statement in the sql field.

    I don't have experience with the PreparedStatement interface, so I may be missing something here, but in order to stub it realistically - and actually test the SQL as written, instead of simply returning canned data from one layer down - wouldn't you effectively need to write an SQL interpreter? Which is itself a large body of code, warranting its own tests...

    You can pass basic SQL to an in-memory database.

    But if you're using stored procedures, you're really not testing the behavior of the SQL parser, just that you spelled the stored procedure's name correctly. It's trivial to mock your own result set object (sketched below).

    You could then do a separate installation of your DBMS to set up fake data and test your stored procedures.
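
    For what it's worth, mocking a result set with Mockito can look like the following sketch; the column name, value, and one-row shape are made up for illustration:

        import java.sql.ResultSet;
        import java.sql.SQLException;

        import static org.mockito.Mockito.mock;
        import static org.mockito.Mockito.when;

        public class FakeResultSets {
            // Returns a fake ResultSet holding a single row.
            static ResultSet oneRow() throws SQLException {
                ResultSet rs = mock(ResultSet.class);
                when(rs.next()).thenReturn(true, false); // one row, then done
                when(rs.getString("name")).thenReturn("alice");
                return rs;
            }
        }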

  • Rudolf (unregistered) in reply to Brendan
    Brendan:
    Hi,
    Jon:
    By "writing unit tests for your code", you do it the wrong way. You should write code for your unit tests.

    That's an awesome idea! That way, instead of returning fake results to real code, you could return real results to fake code - it would save a lot of development time.

    • Brendan

    (AIUI) What Jon is talking about is TDD. This is where the specification basically defines the behaviour. So, you write tests to check the behaviour, then write code to produce that behaviour.

    This does require some honesty from the developers. For instance, my unit tests for a simple string function could be:

    _ASSERT(mystrlen("hello") == 5);
    _ASSERT(mystrlen("goodbye") == 7);

    The 'proper' way to write mystrlen would be to go through the string and count characters, but a function like:

    int mystrlen(const char *str)
    {
        if (strcmp(str, "hello") == 0) {
            return 5;
        } else if (strcmp(str, "goodbye") == 0) {
            return 7;
        } else {
            return 42;
        }
    }

    will also pass the test.

    IMV this is the flaw with TDD. Obviously no one would write the function as above (although, having read this site for years, I'm not entirely sure of that). But with more complex functions it's not impossible that special cases are written just to pass the tests, rather than with a complete understanding.

    (You could say that a 'proper' TDD unit test for the above function should test more strings, but how far do you go? Do you test every possible string?)

  • (cs) in reply to faoileag
    faoileag:
    chubertdev:
    "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." --Brian Kernighan The same is probably true of writing unit tests for your code.
    Even more so. You develop your code based on your assumptions, and you also develop your unit tests based on those same assumptions. It is thus very easy to replicate an assumption-based error from your code in the unit tests, so the unit tests will not tell you that the code is wrong.

    Think leap years. A year is a leap year if it can be divided by 4 without remainder. Unless it can also be divided by 100 without remainder. But if it can also be divided by 400 without remainder, then all of a sudden it is a leap year again.

    If you don't know the third rule, your isLeapYear(2000) will return false. But since you assume that to be the correct result anyway, you might write a test like assert_false(isLeapYear(2000)).

    A peer review of your unit tests might be the road to take here, as it would at least catch the more obvious problems. But with complex algorithms, I don't know.

    That's why I code/design unit tests strictly to the specs. If the specs leave out rule three, then 2000 won't show up as a leap year. When a user reports this, I respond "working as designed", log a bug, and close out the ticket.

    Sure, this is a cheap way of doing it, and you could probably say that some research on my part during the initial project would have prevented this, but the leap year example is just analogous to something far more complex designed by people with lots of acronyms after their names. It's just the nature of my field.

  • faoileag (unregistered) in reply to chubertdev
    chubertdev:
    That's why I code/design unit tests strictly to the specs. If the specs leave out rule three, then 2000 won't show up as a leap year. When a user reports this, I respond "working as designed", log a bug, and close out the ticket.
    Lucky you, if the specs cover everything down to the individual algorithms exercised by the unit tests :-)

    The few "real" specs I have worked with so far were quite good, but usually on a level that you could build your user acceptance tests around them but not much more.

    "Best" spec so far has been "make it work like in the old version". So I did. Presented the new version. Got told that the data shown was wrong. Explained that I double checked the data against the old version and that it was the same. Got the answer "then the old version is broken at that point as well. Fix it in the new version".

  • (cs) in reply to faoileag
    faoileag:
    chubertdev:
    That's why I code/design unit tests strictly to the specs. If the specs leave out rule three, then 2000 won't show up as a leap year. When a user reports this, I respond "working as designed", log a bug, and close out the ticket.
    Lucky you, if the specs cover everything down to the individual algorithms exercised by the unit tests :-)

    The few "real" specs I have worked with so far were quite good, but usually on a level that you could build your user acceptance tests around them but not much more.

    "Best" spec so far has been "make it work like in the old version". So I did. Presented the new version. Got told that the data shown was wrong. Explained that I double checked the data against the old version and that it was the same. Got the answer "then the old version is broken at that point as well. Fix it in the new version".

    Haha, that's still working as designed!

  • (cs) in reply to faoileag
    faoileag:
    That people agree with you does not necessarily mean you are right.
    Yes! If any comment ever deserved to be blue, this does. Added to my sig.
  • Evan (unregistered) in reply to Rudolf
    Rudolf:
    IMV this is the flaw with TDD. Obviously no one would write the function as above (although, having read this site for years, I'm not entirely sure of that). But with more complex functions it's not impossible that special cases are written just to pass the tests, rather than with a complete understanding.

    (You could say that a 'proper' TDD unit test for the above function should test more strings, but how far do you go? Do you test every possible string?)

    The goal of TDD is to get to the point where writing a bunch of special cases becomes more burdensome and annoying than just writing the thing.

    So maybe in your example you start out with the test

    ASSERT(mystrlen("hello")==5);

    Now how do you implement it? Easy.

    int mystrlen(const char * str) { return 5; }

    So then you say "I need a new test", and add "goodbye" and then maybe wind up with your "bad" example function (or something simpler; I'm thinking "return str[0]=='h' ? 5 : 7;"). So then you say "I need a new test", and on the third try you are too lazy to add a new special case and just write the for loop.

    TDD doesn't create any of these problems. You can never test that the function is written with complete understanding, nor do you ever know when to stop writing tests.

  • anonymous (unregistered) in reply to faoileag
    faoileag:
    "Best" spec so far has been "make it work like in the old version". So I did. Presented the new version. Got told that the data shown was wrong. Explained that I double checked the data against the old version and that it was the same. Got the answer "then the old version is broken at that point as well. Fix it in the new version".
    That's when you begin to sound like a broken record. "But the spec said..." "But the spec said..."

    Either they change the spec and you change your pricing, or they get exactly what their spec said.

  • Norman Diamond (unregistered) in reply to Rudolf
    Rudolf:
    (AIUI) What Jon is talking about is TDD. This is where the specification basically defines the behaviour. So, you write tests to check the behaviour, then write code to produce that behaviour.

    This does require some honesty from the developers. For instance, my unit tests for a simple string function could be:

    _ASSERT(mystrlen("hello") == 5);
    _ASSERT(mystrlen("goodbye") == 7);
    The 'proper' way to write mystrlen would be to go through the string and count characters, but a function like:
    int mystrlen(const char *str) {
    if (strcmp(str, "hello") == 0) return 5;
    else if (strcmp(str, "goodbye") == 0) return 7;
    else return 42;
    }
    will also pass the test.

    IMV this is the flaw with TDD. Obviously no one would write the function as above (although, having read this site for years, I'm not entirely sure of that). But with more complex functions it's not impossible that special cases are written just to pass the tests, rather than with a complete understanding.

    People actually do write code like that to get high scores in benchmarks, or to make demos appear to work, or to meet deadlines. It's part of the infinite defects methodology that Joel Spolsky wrote about.

  • (cs)

    I bet TRWTF was some idiotic requirement that all the automated tests pass within however many seconds per test, even if they were run with a client in San Francisco and a server in Tokyo.

  • algorithmics (unregistered)

    Were there any code reviews done at all? It's surprising to see unit tests that seem so contrived. And typically database testing is done via automated integration testing - the old way of using mock DBs, while it does do the job, has proved to be far too cumbersome to be worth using.

  • (cs)

    The more that I think about the leap year example, the more I think that you're approaching it incorrectly. You don't really need to know the rules beyond knowing that a leap year has a February 29, and you can let the language you're working in take care of the rest:

        protected bool IsLeapYear(int year)
        {
            DateTime dt;
            return DateTime.TryParse(string.Format("2/29/{0}", year), out dt);
        }
    
  • anonymous (unregistered) in reply to chubertdev
    chubertdev:
    The more that I think about the leap year example, the more I think that you're approaching it incorrectly. You don't really need to know the rules beyond knowing that a leap year has a February 29, and you can let the language you're working in take care of the rest:
        protected bool IsLeapYear(int year)
        {
            DateTime dt;
            return DateTime.TryParse(string.Format("2/29/{0}", year), out dt);
        }
    
    I'm not sure I understand how checking whether the 2nd of Twentyninedruary exists will help you determine whether it's a leap year.
  • (cs)

    because 'Murica

  • anonymous (unregistered) in reply to chubertdev
    chubertdev:
    because 'Murica
    Sorry, but using strings to internally process dates or times is a Very Bad Idea.

    http://msdn.microsoft.com/en-us/library/ch92fbc1%28v=vs.110%29.aspx

    Because the DateTime.TryParse(String, DateTime) method tries to parse the string representation of a date and time using the formatting rules of the current culture, trying to parse a particular string across different cultures can either fail or return different results. If a specific date and time format will be parsed across different locales, use the DateTime.TryParse(String, IFormatProvider, DateTimeStyles, DateTime) method or one of the overloads of the TryParseExact method and provide a format specifier.
    What you should really use is DateTime.DaysInMonth(year, 2) == 29.

  • anonymous (unregistered) in reply to anonymous
    anonymous:
    chubertdev:
    because 'Murica
    Sorry, but using strings to internally process dates or times is a Very Bad Idea.

    http://msdn.microsoft.com/en-us/library/ch92fbc1%28v=vs.110%29.aspx

    Because the DateTime.TryParse(String, DateTime) method tries to parse the string representation of a date and time using the formatting rules of the current culture, trying to parse a particular string across different cultures can either fail or return different results. If a specific date and time format will be parsed across different locales, use the DateTime.TryParse(String, IFormatProvider, DateTimeStyles, DateTime) method or one of the overloads of the TryParseExact method and provide a format specifier.
    What you should really use is DateTime.DaysInMonth(year, 2) == 29.
    ...or, of course, DateTime.IsLeapYear(year)...

  • (cs)

    Truth, but as my main point was, it's best not to reinvent the wheel.

  • tom (unregistered)

    Testing Java's collections like this is key to keeping the language so reliable. On a related note, I've seen people heavily testing Mockito itself rather than the code they were writing.
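
    A sketch of that anti-pattern, assuming JUnit 4 and Mockito; the test exercises only the mock and never touches production code:

        import java.util.List;

        import org.junit.Test;

        import static org.junit.Assert.assertEquals;
        import static org.mockito.Mockito.mock;
        import static org.mockito.Mockito.when;

        public class TestsOnlyTheMock {
            @Test
            @SuppressWarnings("unchecked")
            public void sizeIsThree() {
                List<String> list = mock(List.class);
                when(list.size()).thenReturn(3);
                // Passes by construction: this verifies Mockito's stubbing,
                // not any code under development.
                assertEquals(3, list.size());
            }
        }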

  • (cs)

    The #3 reason for using mock objects during testing (according to Wikipedia: http://en.wikipedia.org/wiki/Mock_object ) is if the original object is "slow (e.g. a complete database, which would have to be initialized before the test)".

    Which means that it's really the fault of the production manager for not ordering the correct tests to begin with. Gotta love pointy-haired bosses trying to blame it on the workers.

  • (cs)

    Is it just me, or is everyone missing that this DAO can't be unit tested even with mocking? Connection and PreparedStatement are local variables of the getData() method. How would you easily mock them?

    Oh yeah, create a mock of DriverManager.getConnection() or whatever is applicable if a connection pool is used. But you might also need to check the connection string. This whole construct may have issues itself and needs to be tested... and so forth.

    I don't see it. Either use an in-memory DB and, for Java, http://dbunit.sourceforge.net/, or leave it to integration testing.
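
    One common way out, sketched here on the assumption that the DAO may be changed: inject a DataSource, so tests can hand in an in-memory database or a mock (class name and query are hypothetical):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.util.ArrayList;
        import java.util.List;
        import javax.sql.DataSource;

        public class CustomerDao {
            private final DataSource ds;

            public CustomerDao(DataSource ds) {
                this.ds = ds; // the seam: tests supply their own DataSource
            }

            public List<String> getData() throws SQLException {
                try (Connection c = ds.getConnection();
                     PreparedStatement ps = c.prepareStatement(
                             "SELECT name FROM customers");
                     ResultSet rs = ps.executeQuery()) {
                    List<String> names = new ArrayList<>();
                    while (rs.next()) {
                        names.add(rs.getString(1));
                    }
                    return names;
                }
            }
        }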

