• _ (unregistered) in reply to blowhole
    blowhole:
    _:
    Here you go. Sorry, it's Python.

    seconds_jan_1_0000_to_jan_1_1970 = 62167132800
    leap_year = (year % 4 == 0) and ((year % 100 != 0) or (year % 400 == 0))
    seconds_since_jan_1_0000 = (((day - 1) * DAY)
                                + (months[month - 1] * DAY)
                                + (DAY if leap_year and month > 2 else 0)
                                + (int(year * 365.2425) * DAY))
    return seconds_since_jan_1_0000 - seconds_jan_1_0000_to_jan_1_1970

    FUUUUU... 1970-year+3.

    Are you sure it works?

    What? I didn't say anything about "1970-year+3"...

    Am I sure it correctly converts to Unix time? Not completely, but fairly confident.

    Am I sure it facilitates correctly computing the difference in days between two dates? Yes.

  • Jack (unregistered) in reply to Anon
    Anon:
    Robert B.:
    Anon:
    And Banker's rounding is TRWTF!

    "Regular" rounding (round half-up) is asymmetric. Banker's rounding is symmetric. When you're dealing with billions of transactions a day worth trillions of dollars, that kind of matters. "Regular" rounding is naive and should probably never be used if calculation of monetary amounts is involved.

    Rubbish. Rounding .5 up is symmetric where as banker's "rounding" is totally arbitrary and stupid. You might as well flip a coin and decide whether you are going to round up or down.

    If we have a number 5.x:

    x = 0 -> round down to 5
    x = 1 -> round down to 5
    x = 2 -> round down to 5
    x = 3 -> round down to 5
    x = 4 -> round down to 5
    x = 5 -> round up to 6
    x = 6 -> round up to 6
    x = 7 -> round up to 6
    x = 8 -> round up to 6
    x = 9 -> round up to 6

    So for 5/10 values of x, the answer is 5. For 5/10 values of x, the answer is 6. That is symmetric.

    Rounding up at .5 is not symmetric. 1.5 is exactly as close to 1 as it is to 2.

    1.0 -> 1 (-0.0)
    1.1 -> 1 (-0.1)
    1.2 -> 1 (-0.2)
    1.3 -> 1 (-0.3)
    1.4 -> 1 (-0.4)
    1.5 -> 2 (+0.5)
    1.6 -> 2 (+0.4)
    1.7 -> 2 (+0.3)
    1.8 -> 2 (+0.2)
    1.9 -> 2 (+0.1)
    2.0 -> 2 (-0.0)
    2.1 -> 2 (-0.1)
    2.2 -> 2 (-0.2)
    2.3 -> 2 (-0.3)
    2.4 -> 2 (-0.4)
    2.5 -> 3 (+0.5)
    2.6 -> 3 (+0.4)
    2.7 -> 3 (+0.3)
    2.8 -> 3 (+0.2)
    2.9 -> 3 (+0.1)

    Add up the adjustments and you get +1. If it were symmetric, it would be 0. In fact, if you use the "round half to even" rule (IEEE 754), you do get 0, because the +0.5 at 1.5 and the -0.5 at 2.5 cancel each other out.
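    The bias is easy to demonstrate with Python's decimal module, which supports both rounding modes (a quick sketch, not from the thread):

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

# Sum the rounding adjustments over the 1.0 .. 2.9 cycle shown above.
values = [Decimal(n) / 10 for n in range(10, 30)]

bias_half_up = sum(v.quantize(Decimal(1), rounding=ROUND_HALF_UP) - v for v in values)
bias_half_even = sum(v.quantize(Decimal(1), rounding=ROUND_HALF_EVEN) - v for v in values)

print(bias_half_up)    # 1.0 -- round-half-up drifts upward
print(bias_half_even)  # 0.0 -- the +0.5 and -0.5 cases cancel
```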

  • Tasty (unregistered) in reply to snoofle
    snoofle:
    ike:
    The customer *is* always right.
    They may be unreasonable, ignorant, clueless, stubborn, unrealistic, foolish, short-sighted and/or idiotic, but they are always right!

    Tell that to your doctor. We'll see how much longer you live.

  • blowhole (unregistered) in reply to _
    _:
    blowhole:
    _:
    Here you go. Sorry, it's Python.

    seconds_jan_1_0000_to_jan_1_1970 = 62167132800
    leap_year = (year % 4 == 0) and ((year % 100 != 0) or (year % 400 == 0))
    seconds_since_jan_1_0000 = (((day - 1) * DAY)
                                + (months[month - 1] * DAY)
                                + (DAY if leap_year and month > 2 else 0)
                                + (int(year * 365.2425) * DAY))
    return seconds_since_jan_1_0000 - seconds_jan_1_0000_to_jan_1_1970

    FUUUUU... 1970-year+3.

    Are you sure it works?

    What? I didn't say anything about "1970-year+3"...

    Am I sure it correctly converts to Unix time? Not completely, but fairly confident.

    Am I sure it facilitates correctly computing the difference in days between two dates? Yes.

    That was a fix for my algorithm. Actually I think it's year-1970+3. My while is missing an ineffectual +1 as well.

    By the way, I like how yours theoretically supports unsigned unixtime by checking for leap centuries, and also offers some support for 64-bit unixtime.

  • Coder (unregistered)

    There are two bugs in the conversion to the Unix Epoch:

    1. Leap years are not every 4 years. (http://en.wikipedia.org/wiki/Leap_year#Algorithm)

    2. It does not account for a leap year in the part of the algorithm that examines the day value due to months.

    There are also naming problems here. The function should be ConvertToUnixEpoch(). That would make things much clearer.

    I also agree that using the built-in language functions would be a better choice.

    The principle of using the Unix Epoch for date calculations is sound, however. The fact that it's on a Windows server shouldn't matter unless the program is providing the server with the date in that format.

    For those interested here's a discussion on how to more cleanly find the Unix Epoch in C#: http://stackoverflow.com/questions/906034/calculating-future-epoch-time-in-c-sharp
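    For reference, the same "just use the built-ins" fix looks like this in Python (a sketch; the linked Stack Overflow question covers the C# side):

```python
from datetime import datetime, timezone

# The standard library already knows the calendar rules.
def to_unix_time(year, month, day):
    return int(datetime(year, month, day, tzinfo=timezone.utc).timestamp())

# Day differences then fall out of simple arithmetic:
DAY = 86400
print((to_unix_time(2000, 3, 1) - to_unix_time(2000, 2, 28)) // DAY)  # 2 (2000 was a leap year)
```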

  • blowhole (unregistered) in reply to Coder

    I fixed it for him.

    return ((year - 1970) * 365
            + (year - 1969 >> 2)
            + new int[] { 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334 }[month - 1]
            + day
            + (year % 4 == 0 && month > 2 ? 0 : -1)) * 86400;

    This one definitely works, I discovered csc.exe on my system and tested it.

    Who knew C# was so easy?

  • (cs) in reply to gr[a/e]y goat
    gr[a/e]y goat:
    Wait, have Microsoft and "looks pretty" ever really gone together?

    That got me, too.

  • (cs) in reply to Anon
    Anon:
    Rubbish. Rounding .5 up is symmetric where as banker's "rounding" is totally arbitrary and stupid. You might as well flip a coin and decide whether you are going to round up or down.
    Indeed, that is another way to do it. But generating a random number every time is enormously impractical for a few reasons.
    Anon:
    And considering that rounding is about precision, you have to consider that there are unknown decimals beyond what you are looking at. So if we round to 0 d.p.:

    2.5000000000000000000000000001

    Using regular sensible rounding, you get 3. Using bankers rounding, you get 2. Clearly regular rounding is better because the number is closer to 3 than 2.

    NO, and wow you are retarded (unless you are trolling, but given what I've seen here you're probably not). Using banker's rounding you round it up because it's larger than 0.5. Period.

    And FYI IEEE 754 does take into account all the extra digits when rounding after each operation. So if 1.01000000000000000000001 (binary) has to be rounded, it rounds to 1.1. Believe it or not they actually did the math and chose the way that would give the smallest error.

  • El Oscuro (unregistered) in reply to RichP
    RichP:
    Nickster:
    emaN ruoY:
    This comment was designed to be Frist.

    Your comment does not function as it was designed. I will submit a bug report.

    Your bug report has been downgraded to a "feature enhancement request". Rationale: The comment does function as designed, which is to provide a commentary on the article. Desired ranking of Frist is noted, and will be slated for a future upgrade.

    What is the upgrade from frist? /dev/null or /dev/zero?

  • letatio (unregistered) in reply to El Oscuro
    El Oscuro:
    RichP:
    Nickster:
    emaN ruoY:
    This comment was designed to be Frist.

    Your comment does not function as it was designed. I will submit a bug report.

    Your bug report has been downgraded to a "feature enhancement request". Rationale: The comment does function as designed, which is to provide a commentary on the article. Desired ranking of Frist is noted, and will be slated for a future upgrade.

    What is the upgrade from frist? /dev/null or /dev/zero?
    Well, depending on the rounding it might be /dev/two.

  • Kasper (unregistered)

    Four pages of comments, and still nobody pointing out that their definition of YEAR is equal to 364 * DAY.
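    That observation checks out with one line of arithmetic:

```python
DAY = 86400
YEAR = 31449600  # the constant from the article's code

print(YEAR // DAY)        # 364 -- a day short of even a common year
print(YEAR == 365 * DAY)  # False
```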

    The way their code computes the elapsed days is fairly efficient, but difficult to get right. The good thing I can say about it is that it is easy to unit test. Writing code to iterate through every single date from 1700 to 3000 is fairly easy and will run in a split second on modern hardware. You just need to check the elapsed days since some fixed point in the past and verify that the count increases by one every time you proceed to the next day.

    There is nothing wrong with code as clever as shown as long as you use enough unit testing to find and fix all the bugs. And in case your unit test finds the code to be flawless the first time you run it, that means there is a bug in the unit test.

  • gnasher729 (unregistered) in reply to Kasper
    Kasper:
    Four pages of comments. And still nobody pointing out that their definition of YEAR is equal to 364 * DAY.

    QFT.

    In my company, the constant YEAR wouldn't have passed a code review because (365 * 86400) is just as efficient as writing some huge number and a lot clearer. In other words, in a code review I wouldn't give a damn whether their number is right or wrong, the fact that I need a calculator to check it makes it a WTF.

    Well, the name "YEAR" wouldn't have passed a code review in the first place, because the name doesn't give any indication what it actually means. Setting year to 365 makes just as much sense. So it should have been SECONDS_PER_YEAR, for example.

    The rest of the code is obviously idiotic. Hint: In a correct calculation, 1st of March is usually 86400 seconds later than 28th of February, but sometimes it is 2 x 86400 seconds later. There must be something in the code to achieve this, and there isn't.

  • blowhole (unregistered) in reply to gnasher729

    The constant naming is bad and there should never be a constant for seconds per year because it isn't constant.

    However, when it comes to code, the first thing that should concern anyone is whether it works or not. You're right about the constants, but that's far from the biggest issue, or the hardest to spot and fix. It's not even the kind of fault that is Daily WTF worthy.

    Kasper is correct that it should be unit tested. Except his proposed date range is too large for unixtime, and I have a sneaking suspicion that for generating test dates you would be using library functions that are either the same as, or one step removed from, the ones you are testing. The first thing I would do to unit test it? Use a library that I can reasonably rely on to work correctly that does the same thing, and compare results. Totally redundant.

  • Kasper (unregistered) in reply to blowhole
    blowhole:
    proposed date range is too large for unixtime
    As others have pointed out, you don't even need to work with such large numbers, since the input and the final output are both going to be days anyway. So it would be entirely possible to do the calculations in a way that does not overflow 32-bit integers and covers a large range of years.

    If you don't need the code to cover that many years, then feel free to leave it out of the unit test. Just keep in mind that code tends to never survive for exactly as long as you expected. Usually it will either never make it to production or outlive the expectation by an order of magnitude. If your unit tests only cover the range of dates that can be represented in 32 bit unix time, then you can use that inside your implementation and pass the unit tests.

    My point was just that it is completely feasible to cover such a large range in a unit test. And since the calendar repeats every 400 years (apart from the phase of the moon), it makes sense to cover a period a bit longer than that to ensure you didn't miss a corner case.

    blowhole:
    I have a sneaky suspicion that for generating test dates you would be using library functions that either are the same as, or one step away from ones that are from those that you are testing. The first thing I would do to unit test it? Use a library that I can reasonably rely on to work correctly that does the same thing and compare results. Totally redundant.
    That's not how I would do it. I would start the unit test with a fixed date far in the past. Then I would have one counter for the number of days tested so far, and another set of variables representing year, month, and day of the month. At every iteration I would increase the counter as well as the day of the month, followed by a few ifs to reset the day of month and month to 1 and increase the month and year.

    You'd be writing the code once for the actual routine and once for the unit test. But the version in the unit test needing only to increase from one date to the following date would be totally different and a lot easier to get right. The risk of making the same flaw in both of them, and having the code reviewer miss them both is tiny.

    I'd write the unit test code roughly like this. Given the different structure, the risk of introducing the same flaw in both is tiny. Even if I made a mistake in this unit test code, I'm confident that would also be revealed by running the unit test.

    int counter, year, month, day;
    year = 1700;
    month = 1;
    day = 1;
    for (counter = 0; year < 3000; ++counter) {
        assert(days_between(1700, 1, 1, year, month, day) == counter);
        ++day;
        if (day > 31) day = 1;
        if ((day > 30) && (month == 4 || month == 6 || month == 9 || month == 11)) day = 1;
        if (month == 2) {
            if (day > 29) day = 1;
            if ((day == 29) && ((year % 4) || ((year % 400) && !(year % 100)))) day = 1;
        }
        if (day == 1) ++month;
        if (month == 13) {
            month = 1;
            ++year;
        }
    }
    assert(counter > (3000 - 1700) * 365);
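    The same exhaustive walk is shorter in Python if you lean on the standard library; here `days_between` is a stand-in for the routine under test (a sketch, not Kasper's code, so this version passes trivially):

```python
from datetime import date, timedelta

def days_between(y1, m1, d1, y2, m2, d2):
    # Placeholder for the implementation under test; this one simply
    # delegates to the standard library.
    return (date(y2, m2, d2) - date(y1, m1, d1)).days

# Walk every date from 1700-01-01 until the year 3000 and check that
# the elapsed-days count advances by exactly one per day.
d = date(1700, 1, 1)
counter = 0
while d.year < 3000:
    assert days_between(1700, 1, 1, d.year, d.month, d.day) == counter
    d += timedelta(days=1)
    counter += 1
```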

  • AN AMAZING CODER (unregistered) in reply to PiisAWheeL
    PiisAWheeL:
    AverageNewbie:
    cdosrun:
    noname:
    It does account for leap years: (((year - 1970)/4) * DAY)

    But because 2000 is not a leap year the generated date is one day off. As long as both dates are between 1.1.2001 and 31.12.2099 no one will notice.

    ((2000 - 1970)/4) = 7.5, no?

    (2000 - 1970) / 4 = 7

    year is an integer.

    Don't most of the developed countries round up at .5?

    holy shit, please stick to building WordPress blogs from here on out if you're not aware how computers handle decimals.

  • blowhole (unregistered) in reply to AN AMAZING CODER
    AN AMAZING CODER:
    PiisAWheeL:
    AverageNewbie:
    cdosrun:
    noname:
    It does account for leap years: (((year - 1970)/4) * DAY)

    But because 2000 is not a leap year the generated date is one day off. As long as both dates are between 1.1.2001 and 31.12.2099 no one will notice.

    ((2000 - 1970)/4) = 7.5, no?

    (2000 - 1970) / 4 = 7

    year is an integer.

    Don't most of the developed countries round up at .5?

    holy shit, please stick to building WordPress blogs from here on out if you're not aware how computers handle decimals.

    It sucks not to understand what goes on in integer division. I wonder if some of the confused people have only used loosely typed languages, where a division between two integers can produce a floating-point result. But yes, wheel's comment is just daft.
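    In Python 3 terms (since the original snippet was Python), the distinction is explicit in the operator:

```python
# / always yields a float in Python 3; // is the integer (floor) division
# that C-family languages perform implicitly on two int operands.
print((2000 - 1970) / 4)   # 7.5
print((2000 - 1970) // 4)  # 7
```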

  • Simon (unregistered) in reply to Scythe
    Scythe:
    Not to mention that the /4 division could have been handled through bit shift;)

    If I saw a developer using bit shifting for multiplication in C#, I'd have them fired. Out of a cannon.

    There's no reason for that kind of confusion-inducing performance optimisation tricks in a modern app where the compiler can inevitably do a better job of it than the average code monkey (and most of the higher primates, too)

  • Simon (unregistered) in reply to catastrophic
    catastrophic:
    And if you ever revisit code you wrote 2+ years ago you will likely find plenty of ugly stuff that you would do differently if you were the original coder.
    "Who's the idiot who wrote this?"

    (checks the file history)

    "Oh. Never mind..."

  • Meh! (unregistered) in reply to Franky
    Franky:
    hymie:
    noname:
    But because 2000 is not a leap year

    Bzzzt. Thanks for playing.

    For all those who still don't get leap years: every 4 years, except every 100 years, except every 400 years. So 1900 wasn't a leap year, but 2000 was as you can divide by 400 (seriously, we had to implement this in every language we learned at school as one of the first exercises)

    Yeah, it's bizarre how many people fail to understand the century year criteria for a leap year.

    I usually take the time to explain the reasons for it. The Julian calendar assumed a solar year length of 365 and a quarter days, and so had a leap year every 4 years without exception, but the solar year is actually 11 minutes shorter. That may not seem like much, but it accounts for 3 days every 4 centuries. Hence the Gregorian reform ensured that only 1 century year in 4 is a leap year.

    The Julian calendar still follows the old rule and because it has more Feb 29ths inserted it's now 13 days "behind" the Gregorian calendar.
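    The century rule is one line once spelled out, and the 11-minute drift really does work out to about 3 days per 4 centuries (a quick sketch mirroring the Wikipedia algorithm cited earlier in the thread):

```python
def is_leap(year):
    # Every 4 years, except centuries, except every 400 years.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap(1900))  # False -- century not divisible by 400
print(is_leap(2000))  # True

# 11 minutes/year over 400 years, in days:
print(11 * 400 / (60 * 24))  # roughly 3.06
```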

  • temotodochi (unregistered)

    Using the epoch really seems like a nice idea to circumvent problems in precisely timed transactions across timezones.

  • blowhole (unregistered) in reply to Simon
    Simon:
    Scythe:
    Not to mention that the /4 division could have been handled through bit shift;)

    If I saw a developer using bit shifting for multiplication in C#, I'd have them fired. Out of a cannon.

    There's no reason for that kind of confusion-inducing performance optimisation tricks in a modern app where the compiler can inevitably do a better job of it than the average code monkey (and most of the higher primates, too)

    I used it because it's more portable, although if you really want your algorithm to be truly portable you would explicitly floor.
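    The portability point is about negative operands: C-family integer division truncates toward zero, while an arithmetic right shift (and Python's `//`) floors. A quick illustration in Python:

```python
print(30 >> 2, 30 // 4)    # 7 7   -- identical for non-negative ints
print(-30 >> 2, -30 // 4)  # -8 -8 -- both floor
print(int(-30 / 4))        # -7    -- truncation toward zero, as in C
```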

  • Onan (unregistered) in reply to PiisAWheeL
    PiisAWheeL:
    AverageNewbie:
    cdosrun:
    noname:
    It does account for leap years: (((year - 1970)/4) * DAY)

    But because 2000 is not a leap year the generated date is one day off. As long as both dates are between 1.1.2001 and 31.12.2099 no one will notice.

    ((2000 - 1970)/4) = 7.5, no?

    (2000 - 1970) / 4 = 7

    year is an integer.

    Don't most of the developed countries round up at .5?

    Greece can't afford to any more

  • OldBoy (unregistered) in reply to Sannois
    Sannois:
    n_slash_a:
    Part of our coding standard is that no line can be over 80 characters. This allows 1) the code to be printed out

    Help! Help! I've fallen through a wormhole and ended up in 1996!

    1976 - I was using a 150 column printer attached to a mainframe in the very early 1980s

  • OldBoy (unregistered) in reply to blowhole
    blowhole:
    Simon:
    Scythe:
    Not to mention that the /4 division could have been handled through bit shift;)

    If I saw a developer using bit shifting for multiplication in C#, I'd have them fired. Out of a cannon.

    There's no reason for that kind of confusion-inducing performance optimisation tricks in a modern app where the compiler can inevitably do a better job of it than the average code monkey (and most of the higher primates, too)

    I used it because it's more portable, although if you really want your algorithm to be truly portable you would explicitly floor.

    More portable for those systems that don't support multiplication.

  • Reply to TRWTF (unregistered) in reply to PiisAWheeL

    Big difference between rounding and truncation.

    Double-to-int conversion uses truncation. That is, it chops off the decimal part.
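    A quick Python illustration of the difference (and, incidentally, Python 3's built-in `round()` is banker's rounding):

```python
import math

print(int(7.5), int(-7.5))     # 7 -7 -- int() truncates toward zero
print(math.floor(-7.5))        # -8   -- floor goes the other way for negatives
print(round(7.5), round(8.5))  # 8 8  -- round half to even
```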

  • Reply to TRWTF (unregistered) in reply to n_slash_a
    n_slash_a:
    Part of our coding standard is that no line can be over 80 characters. This allows 1) the code to be printed out and 2) the ability to have other windows or tool-bars open next to the code.

    Two things.

    1. Laptops - no more printing code.
    2. Scrollbars - no more column counting.

    Who reviews printed code anymore???

  • Kjella (unregistered)

    Just my 0.02$ on this story, nothing says that he actually found and/or reported that the code was buggy in any way. Just saying "this code could be simpler and better using the built-in date functions" isn't a bug report, at least not unless you've put something very specific in the contract. If I ever tried to outsource something I'd probably try to include something like:

    "Reimplementing functionality already present in the stanadard library shall be concidered a C-level bug (with A being critical, B major, C minor). This includes but is not limited to:

    • Creating your own date calculations
    • Writing your own XML parser or creating XML directly from strings.
    • Whatever else WTF I could list"

    That way I could at least point to it and say: this isn't just poor code, we've agreed doing this is a bug. Don't know if they'd accept it, but I'd be wary of any company that clearly wouldn't.

  • Kirby L. Wallace (unregistered)

    I'd have to say I agree with the designers.

  • Paul Neumann (unregistered) in reply to ObiWayneKenobi
    ObiWayneKenobi:
    ParkinT:
    "If it ain't broke, don't fix it" So much for Code Excellence!

    I DESPISE that quote and the associated mentality with the fire of a thousand suns. Something can be "broke" and still appear to be working properly; doesn't mean it isn't broken and should be ignored.

    This mentality is the reason there is so much shitty software out there (and the reason this site exists) because people are so reluctant to mercilessly refactor code and follow good craftsmanship principles under this false pretense.

    The greatest lie in business is "The customer is always right". The second is "If it ain't broke, don't fix it".

    So if it ain't broke, redefine broken?

  • Vlad (unregistered) in reply to ObiWayneKenobi

    The reason it's not done much? There are reasons; I'll start off with two.

    1. To most management, programmers==extra desktop/tier 1 phone support staff. There's a fsckhueg demand for desktop support because users are clueless. And if the users can't squeeze an obvious desktop support task through to someone who was required to have several years of programming experience just to get an interview, they simply submit the problem report as "Application not working" and then the phone call goes directly through to a programmer who has to stop what he's doing, walk to the user's desk and adjust the monitor resolution, knock desk crud off the bottom of the mouse or empty the cheedle out of their keyboard.
    2. To most management, programmers who aren't doing phone support and aren't coding in new features the sales team promised that don't already exist are wasting time and time equals money, and they don't need to be wasting money. Maybe the code can be improved, but that also has to be tested, and that's even more money that's being wasted. Why are you insisting that wasting money is a good idea?

    I could probably think of more, but it's the end of the day and I'm mostly spent from arguing with users.

  • Vlad (unregistered) in reply to ObiWayneKenobi
    ObiWayneKenobi:
    ParkinT:
    "If it ain't broke, don't fix it" So much for Code Excellence!

    I DESPISE that quote and the associated mentality with the fire of a thousand suns. Something can be "broke" and still appear to be working properly; doesn't mean it isn't broken and should be ignored.

    This mentality is the reason there is so much shitty software out there (and the reason this site exists) because people are so reluctant to mercilessly refactor code and follow good craftsmanship principles under this false pretense.

    The greatest lie in business is "The customer is always right". The second is "If it ain't broke, don't fix it".

    One of the other reasons that it's not done much, is that there is simply so much shitty code, and it's everywhere. EVERYWHERE.

    Why? Because anyone and everyone can be a programmer right? And who wouldn't want to be a programmer?

  • Uri (unregistered)

    Nobody seemed to notice: public const int YEAR = 31449600;

    That's 364 days a year. Good thing it isn't used...

  • chris hanlin (unregistered)

    Unsubscribe me!!!
