• Pablo (unregistered)

    Hell yeah, let time put each guy in his place!

  • (cs)

    But then why do they keep saying that offloading things to the DB is quicker than using page scripts?

  • Zan (unregistered)

    I count a wtf-line/line ratio of 0.666, not so bad.

    (the wtf/line must be around 2, I guess)

  • White Echo (unregistered)

    The worst part is I am sure the programmer is proud of himself.

    Unfortunately there are many, many bad programmers like that out there, and their employers don't always realize it, ever.

  • C User (unregistered)

    Puh-leaze. Get rid of C? Are you insane? Go back to Visual Basic, where life is simple, straightforward, and utterly inane and boring!

    The header file you mention--complex.h--describes complex numbers and not that C is complex. Although I suppose for the small of mind and intelligence C is complex.

  • C programmer with a sense of humor (unregistered) in reply to C User

    I think the original poster is well aware of what complex.h is... ;)

  • Glenn Lasher (unregistered)

    See, now, the real WTF is that he should have used Class::DBI.

    /me ducks

  • (cs) in reply to C User

    I C dead people ...

    That's what I think I saw after reading that code ...

    I've seen a guy who has written far worse code in VB to do things the language was not meant to do. Some of his "libraries" look somewhat like this.

    I wish I had that source code. Most of it would be WTF-worthy... like the non-LIFO "stack" class.

  • Ron (unregistered)
    chop($NOW=`date +%\Y\m%\d`)

    Is somebody suggesting it's actually a good idea to assume the OS has a "date" function, and to use it like that?
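
    For what it's worth, a minimal in-process sketch of the same thing (assuming all you want is that YYYYMMDD string) would be:

    use POSIX qw(strftime);
    # same string, no shell-out, and no trailing newline to chop off
    my $NOW = strftime('%Y%m%d', localtime);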

  • (cs) in reply to C User
    C User:
    Puh-leaze. Get rid of C? Are you insane? Go back to Visual Basic, where life is simple, straightforward, and utterly inane and boring!

    The header file you mention--complex.h--describes complex numbers and not that C is complex. Although I suppose for the small of mind and intelligence C is complex.

    Wow. I'm speechless. Did you even read past the first line?

  • Ryan (unregistered)

    This actually makes some sense if the application can't rely on the application server and database server running with synchronized clocks (or even in the same time zone). The database clock is the "master" clock of the application. It's a poor man's application-level NTP.

    It would, of course, be nice to require that all machines have UTC-synchronized clocks and have their time zones properly set. If that's not possible, it would make sense to get the time once an hour from the DB server and cache an offset to the local clock (in milliseconds) to calculate timestamps locally.
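
    A rough sketch of that offset-caching idea, in the spirit of the original Perl (the DSN and the datediff()/getutcdate() query are assumptions for a Sybase/SQL Server-ish setup, and it uses whole seconds rather than milliseconds):

    use DBI;

    # hypothetical connection details
    my $dbh = DBI->connect('dbi:Sybase:server=DBSERVER', 'user', 'pass');

    my ($offset, $last_sync) = (0, 0);

    sub db_now {
        # re-sync against the database clock at most once an hour
        if (time() - $last_sync > 3600) {
            my ($db_epoch) = $dbh->selectrow_array(
                q{SELECT datediff(ss, '19700101', getutcdate())});
            $offset    = $db_epoch - time();
            $last_sync = time();
        }
        return time() + $offset;   # local clock corrected to the DB's clock
    }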

  • zbigg (unregistered)

    I bet he had problems with timezones. The DB was set to one timezone, but his script ran in another, and to do some calculations he had to synchronize.

    It's a WTF ... but it's really very hard to know three things:

    • what a timezone is
    • how to change/use a timezone
    • what timezone I should use

    ;).

  • Dave (not that one) (unregistered)

    Let's say you have a database server in Oklahoma, but it is queried by clients all over the world. A user in Tokyo creates a transaction that includes a date, and writes it to the database. A user in New York later reads that transaction and needs to know the time it occurred, in the New York time zone. How would you store and manipulate date/time in the database to ensure this could be done? Assume there are no stored procedures, and client code needs to determine the time.

    Hint: It may be useful to know the time on the SQL Server.

  • MacNugget (unregistered) in reply to Ron
    C User:
    Puh-leaze. Get rid of C? Are you insane?
    Ron:
    Is somebody suggesting it's actually a good idea to assume the OS has a "date" function, and to use it like that?

    satire: The use of irony, sarcasm, ridicule, or the like, in exposing, denouncing, or deriding vice, folly, etc.

  • zbigg (unregistered)

    LOL. Three timezone-related posts in a one-minute period... it must be telepathy ;)

  • Animator (unregistered)

    The real WTF is using chop instead of chomp!
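
    For anyone who has forgotten the difference:

    my $with_newline = "20070418\n";
    my $bare         = "20070418";
    chomp $with_newline;   # removes a trailing newline only -> "20070418"
    chop  $bare;           # blindly removes the last character -> "2007041"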

  • n (unregistered)

    This reminds me of a story my boss likes to tell me occasionally. (Each time I pretend not to have heard it before.) Apparently once upon a time a new hire was told to write a perl script that logged to a file. When it was running in production, it started taking up vast amounts of CPU for a small task. It took them a while to figure out what was going on, but eventually it became clear that it was calling

    date
    every time it logged.

  • Troubled porter of code (unregistered)

    This reminds me of some php code I inherited last year.

    $check=` ls -lt /var/apache/htdocs | grep pingdata | awk ' { print $6,$7,$8 } '`;
    echo "Printer Stats as of  $check EST";

    They should invent a file modification time function for php.

  • Anonymous (unregistered) in reply to Dave (not that one)

    That's why time should always be handled and manipulated in UTC (GMT) time. Time zones are strictly a user interface issue, or should be.

  • AnonY (unregistered)

    Has anyone tried to run

    date +%Ym%d
    from the cmd line (assuming that you have a system with the command on it)? Well on my mac it gives me 2007m18. m is always my favorite month of the year :)

  • wade b (unregistered) in reply to Dave (not that one)

    I don't know Perl so I can't comment if that code is written correctly.

    But I always allow the database server to generate date/time values.

    I don't really have to explain to you all why that is a good idea, do I?

    Dave mentioned one very valid reason.

    The other is that we often don't have control over the client workstations accessing the database; therefore synchronizing their clocks with NTP is not an option.

    Unless this code is badly written, the concept is extremely valid and is actually a "best practice" as far as I'm concerned.

    Go ahead and try to convince me otherwise - I'll still do things this way.

    Date/times are extremely important in transactional environments so you'd best get those values from a known stable clock.

    Why fool with sync'ing many client clocks (which can still go wrong if user has hosed the NTP service or if the client network is acting up.)

    Show me the WTF please?

  • wade b (unregistered) in reply to Anonymous
    Anonymous:
    That's why time should always be handled and manipulated in UTC (GMT) time. Time zones are strictly a user interface issue, or should be.

    Wrong-o. See my post above.

    You realize the user can set the time to anything they want, right?

    In a transaction environment, you ALWAYS want to get the time from the server, from a clock that you maintain and control.

    This solution is nice, easy, predictable, and controllable.

    Why make things tougher than they have to be?

  • Brent Ashley (unregistered)

    You're saying time and/or localtime would have been a better alternative?

    So tell me, how exactly do you get the database time on a remote db server using time and localtime?

    Have you considered an app that uses a db on another server?

    Have you considered a db instance that's set to a different timezone than the server it's on?

    WTF?
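
    Which, if you are already holding a DBI handle, is about one line (getdate() here assumes Sybase/SQL Server; other engines spell it now() or CURRENT_TIMESTAMP):

    # $dbh is an existing DBI connection to the database in question
    my ($db_time) = $dbh->selectrow_array('SELECT getdate()');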

  • (cs) in reply to Ryan

    Application time, database time, but with user-centered design, what matters is the user's time. Use a web cam and artificial vision to read the user's wristwatch. --Rank

  • Ron (unregistered) in reply to MacNugget
    MacNugget:
    Ron:
    Is somebody suggesting it's actually a good idea to assume the OS has a "date" function, and to use it like that?

    satire: The use of irony, sarcasm, ridicule, or the like, in exposing, denouncing, or deriding vice, folly, etc.

    I re-read that line a couple times before posting, and he sounded pretty serious.

    Sometimes sarcasm doesn't quite come across over the Internet properly, but he sounded like he was suggesting that's the easiest way to do it, not making fun of that way.

  • Some Dude (unregistered) in reply to Dave (not that one)
    Dave (not that one):
    Let's say you have a database server in Oklahoma, but it is queried by clients all over the world. A user in Tokyo creates a transaction that includes a date, and writes it to the database. A user in New York later reads that transaction and needs to know the time it occurred, in the New York time zone. How would you store and manipulate date/time in the database to ensure this could be done? Assume there are no stored procedures, and client code needs to determine the time.

    Hint: It may be useful to know the time on the SQL Server.

    Just use UTC and be done with it. Solutions need not be overly complex.

  • Dr. Whoski (unregistered)

    In Soviet Russia, time synchronises YOU!

  • (cs)
    There's even a header file called complex.h, if you need proof.
    That's cute.

    On a related note, the WTFs here are complex: they have both real and imaginary parts...

  • JMC (unregistered)

    Are you insane?? Most of the programs you use on a day-to-day basis are written in C...

  • Winter (unregistered) in reply to wade b
    wade b:
    I don't know Perl so I can't comment if that code is written correctly.

    But I always allow the database server to generate date/time values.

    I don't really have to explain to you all why that is a good idea, do I?

    Dave mentioned one very valid reason.

    The other is that we often don't have control over the client workstations accessing the database; therefore synchronizing their clocks with NTP is not an option.

    Unless this code is badly written, the concept is extremely valid and is actually a "best practice" as far as I'm concerned.

    Go ahead and try to convince me otherwise - I'll still do things this way.

    Date/times are extremely important in transactional environments so you'd best get those values from a known stable clock.

    Why fool with sync'ing many client clocks (which can still go wrong if user has hosed the NTP service or if the client network is acting up.)

    Show me the WTF please?

    I vote we make this whole post the new WTF.

    Especially this bit:

    wade b:
    Go ahead and try to convince me otherwise - I'll still do things this way.

    That is how a lot of these WTFs come to be.

  • (cs)

    What a wish! It won't happen, though, as C is a foundation-laying language to introduce noobs to programming.

    If you want them to start thinking correctly about programming you have to start somewhere, and C is perfect.

  • John (unregistered)

    Right, he was explaining that that is the easiest way to do something stupid.

  • THC (unregistered)

    He needs a large table and he could even move the bad old '+' into good SQL .. "SELECT SUM(id) FROM tbl WHERE id='1' OR '3' "

  • Milkshake (unregistered) in reply to Ghost Ware Wizard
    Ghost Ware Wizard:
    What a wish! It won't happen, though, as C is a foundation-laying language to introduce noobs to programming.

    If you want them to start thinking correctly about programming you have to start somewhere, and C is perfect.

    Besides, there's no time to replace it. We're all too busy eating babies to control the overpopulation problem.

  • PS (unregistered) in reply to Ghost Ware Wizard
    Ghost Ware Wizard:
    What a wish! It won't happen, though, as C is a foundation-laying language to introduce noobs to programming.

    If you want them to start thinking correctly about programming you have to start somewhere, and C is perfect.

    Exactly! That's why unis are switching to Java :p

  • (unregistered)

    Not a WTF at all!! Reading the time from the server is a best practice for me, if we are talking about client-server code!!!!

  • real_aardvark (unregistered) in reply to wade b
    wade b:
    Anonymous:
    That's why time should always be handled and manipulated in UTC (GMT) time. Time zones are strictly a user interface issue, or should be.

    Wrong-o. See my post above.

    You realize the user can set the time to anything they want, right?

    In a transaction environment, you ALWAYS want to get the time from the server, from a clock that you maintain and control.

    This solution is nice, easy, predictable, and controllable.

    Why make things tougher than they have to be?

    Right-o. Hang on, that means something different in English English. Never mind, I'll use it both ways.

    Only to a (not very good) DB Admin is this solution nice, easy, predictable and controllable. It certainly isn't nice, and it sure ain't easy: you have to understand perl and T-SQL even to grab a hint at what it's doing. I suppose it's predictable, in that it has no side-effects, but "predictable" is not a sufficient prerequisite for "good". It's also controllable, in the same way that one can always control segmentation faults by dereferencing a null pointer.

    The original poster (whose name you've chopped, or possibly chomped) is correct. If you are going to do this sort of thing, then store a UTC value (or equivalent, if you need dates before 1970 or after 2038). It is up to the user interface part of the code to convert this to the correct time-zone, locale, format, etc.

    Storing hard-coded date-time strings is a ludicrous way to synchronise transactions across a network. Apart from anything else, with UTC you can instantly see which timestamp precedes which. Try that with a string ...
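
    Something along these lines, say, with an integer epoch column (the table, column and zone names are made up, and the TZ trick assumes a Unix-ish perl):

    # store a plain epoch value: integer seconds since 1970, UTC
    my $ts = time();
    $dbh->do('INSERT INTO events (occurred_at) VALUES (?)', undef, $ts);

    # convert only at display time, in whatever zone the viewer wants
    use POSIX qw(strftime tzset);
    local $ENV{TZ} = 'America/New_York';
    tzset();
    print strftime('%Y-%m-%d %H:%M:%S %Z', localtime($ts)), "\n";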

    As usual with WTFs, we have to guess as to the environment and the requirements. There is no suggestion here that any sort of distributed service is involved -- indeed, the simultaneous use of the $NOW trick suggests that there isn't one. I get a strong whiff of cut'n'paste here: it looks like the programmer is assembling new code from two different sources: one that used that horrible database trick, and the other that used an equally horrible perl/system call trick.

    And the perl that calls out to the database is truly disgusting.

    Naked 101s are for Big Brother only, not for everyday use.

    Grabbing a reference to a cursor (thus the @$ on the lhs), then iterating through it when there is only one sensible row, is laughable. And obfuscatory.

    For those who wish to know, btw:

    Sybase documentation:

    Date format   Sample
    101           04/05/2000

  • Animator (unregistered) in reply to wade b

    The thing is, this isn't about transactions. If you need a stable clock then you should look up the time in the query where you actually use it.

    That is, instead of first selecting it into your own code and into a variable of your own, put it directly in the query you actually need it in.

    So not insert into some_table (some_field) values ($some_time), but insert into some_table (some_field) values (getdate()) (or whatever you want).

    If it is just for displaying/setting defaults on a form/... then querying the database is just plain silly and creates extra, unneeded overhead. Use the local clock.
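
    In DBI terms, roughly (some_table/some_field as above, and getdate() again assumes Sybase/SQL Server):

    # round-trip version: fetch the time first, then use it in a second statement
    my ($now) = $dbh->selectrow_array('SELECT getdate()');
    $dbh->do('INSERT INTO some_table (some_field) VALUES (?)', undef, $now);

    # single-statement version: let the server fill in its own time directly
    $dbh->do('INSERT INTO some_table (some_field) VALUES (getdate())');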

  • Falkan (unregistered)

    The last great work written in C is Schubert's 8th Symphony. ;-)

  • wade b (unregistered) in reply to Winter
    Winter:
    I vote we make this whole post the new WTF.

    Especially this bit:

    What have you added to the discussion?

    I clearly stated I do not know Perl. Other posters have pointed out how badly implemented the Perl code is.

    That is not what I was talking about.

    I merely said that if you want a good, stable clock source you get it from the server.

    And yes, you don't want to create a round-trip just to get the time.

    That is elementary.

    UTC or not, if you need accurate, stable time you get it from a clock source that YOU control.

    Capiche?

  • wade b (unregistered) in reply to real_aardvark
    real_aardvark:
    The original poster (whose name you've chopped, or possibly chomped) is correct. If you are going to do this sort of thing, then store a UTC value (or equivalent, if you need dates before 1970 or after 2038). It is up to the user interface part of the code to convert this to the correct time-zone, locale, format, etc.

    I don't disagree with you at all. Again, I don't know Perl.

    I had no idea the guy was storing a date/time as a string in the DB.

    That's just horrible and violates domain integrity in the same way as storing an integer value in a varchar does.

    My post was aimed more at the people who were making comments about acquiring the time from the client.

    In that context, I stand by what I have said.

  • Elephant (unregistered)

    In reply to the "just use UTC" guys: That works part of the time. A problem situation is with appointments scheduled to the future. You schedule an appointment to October 10th, 9:00AM, six months in advance. In July, your government decides that this year, daylight savings time starts on October 1st rather than October 15th as it has been.

    If you stored it in UTC, your appointment just got rescheduled.
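
    The usual way around that is to store future appointments as local wall-clock time plus a zone name, and only resolve them to an instant when you need one; a sketch assuming the DateTime module (and an up-to-date tz database on the box):

    use DateTime;

    # rebuild the instant from the stored fields when it is needed; the
    # zone's DST rules as known to the tz database at that moment apply
    my $appt = DateTime->new(
        year => 2007, month => 10, day => 10,
        hour => 9, minute => 0,
        time_zone => 'America/Chicago',   # stored zone name, made up here
    );
    my $utc_instant = $appt->epoch;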

  • Still coding in C++ (unregistered) in reply to AnonY

    AnonY:
    Has anyone tried to run

    date +%Ym%d
    from the cmd line (assuming that you have a system with the command on it)? Well on my mac it gives me 2007m18. m is always my favorite month of the year :)

    I also get 2007m18 when running date +%Ym%d in MinGW.

  • Dave (not that one) (unregistered)

    wade b:
    UTC or not, if you need accurate, stable time you get it from a clock source that YOU control.

    Those of you advocating UTC are correct that it is the same value all over the world right now. As Wade says, though, how do you have any confidence in the client's time and its relation to the server's time? Even if the server's time is off by a bit, it will be consistently off rather than randomly and unpredictably off as you'll get with client times.

    By getting the server time, the client can use it as a benchmark to adjust the times it reports to the server. Obviously, if the time represents the time of the transaction the best thing to do is let the server insert the time--assuming your server can do that. Depending on the situation, the code in the original post may have been the best solution to the clock problem.

  • not so sure (unregistered) in reply to Pablo

    Time to defecate...

  • devnull (unregistered) in reply to Still coding in C++

    I think you wanted "date +%Y%m%d" instead.

  • steve (unregistered)

    The real WTF is the quality of comments on this post and that I read them all.

    There is a lot of context missing in this example - and I agree with Wade that, for the purpose of a trusted clock, getting the date from the server is the right way to proceed.

    A better implementation, though, would be to embed the getdate() function into all your stored procedures so that they automatically handle all your date inserting/updating. This way you will never need to bring it into the perl/c/java world except for display purposes, and it won't lead to the confusion that surrounds this post.

  • (cs) in reply to Ryan
    Ryan:
    This actually makes some sense if the application can't rely on the application server and database server running with synchronized clocks (or even in the same time zone). The database clock is the "master" clock of the application. It's a poor man's application-level NTP.

    It would, of course, be nice to require that all machines have UTC-synchronized clocks and have their time zones properly set. If that's not possible, it would make sense to get the time once an hour from the DB server and cache an offset to the local clock (in milliseconds) to calculate timestamps locally.

    I've seen a lot of shops where the time synchronization was all over the place (meaning non-existent). It got better with win2k3/xp/2k environments, but then they often forgot the Unix/Linux servers ... where the databases were running. So, IMHO, the second code snippet makes sense, in a weird way ...

  • Sleepy Programmer (unregistered)

    I once worked with a high-priced Oracle consultant. He was actually really good, but had a "hands off" approach (meaning that he never wanted to touch a keyboard; he would dictate typing during the initial phase and then, once he gauged that the developer knew what they were doing, would change to a more task-oriented approach).

    During my initial time spent with him, he asked me to open SQLPlus and type "SELECT (500 / 15) * 100 FROM dual" - or some such numbers, basically using Oracle as a calculator. Besides just being able to do the math in my head, I commented that I had a calculator, or I could just use perl. He wasn't so amused.

    Captcha: waffles (mmmmm. waffles)

  • betaray (unregistered) in reply to Elephant
    Elephant:
    your government decides that this year, daylight savings time starts on October 1st rather than October 15th as it has been.

    This is quite an unusual situation, but I would like to know how you would solve this without needing to change your code to reflect the new daylight savings rules. If you store your values in UTC/GMT, you're right, the appointment will occur at the wrong time if the system is unable to determine the correct local time, but the reverse is also true. If you store the time as local time, but are unable to convert local time to system time correctly, the appointment will occur at the wrong time, but in the other direction.

    I have always been a fan of storing things in UTC and converting back to local time for display purposes. The situation you describe, where the relationship between local time and universal time changes, is its one weak point.

    In this case, if you have something scheduled in the future and your relationship changes, you will have to update all of your UTC times in the future, and you will now have two sets of rules for converting universal time into local time. If you store things as local time you need to jump through all sorts of hoops to ensure you're getting the correct differences in times. Did the task that started at 2:00am on March 11th and finished at 2:01am on March 11th take 1 minute to complete, or did it take 1 hour and 1 minute? Woe be unto those that want to find the difference between the local times of two timezones without converting to UTC.
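
    Keeping epoch values around makes at least that duration question answer itself; a tiny sketch with made-up numbers:

    # hypothetical row with integer epoch columns
    my $row = { started_epoch => 1173596400, finished_epoch => 1173596460 };

    # plain subtraction gives the true elapsed time, whatever the local
    # clocks were doing around the DST switch
    my $elapsed = $row->{finished_epoch} - $row->{started_epoch};   # 60 seconds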

    To summarize, daylight savings time sucks.

    I thought I would also mention, UTC/GMT is a time zone, and is not the same as [url=http://en.wikipedia.org/wiki/Unixtime]unix time[/url], which is the count of seconds since 00:00:00 UTC on January 1, 1970 and on 32-bit machines will end in 2038.
