• (nodebb)

When I come to a clock store, I can see many clocks per second, but not 1000. Maybe 14?

  • TopTension (unregistered)

I fail to see this as a WTF. In my opinion it is good practice to use the CLOCKS_PER_SEC define; that's why it was provided. Otherwise you would code to the ASSUMPTION that there are 1000 clock ticks to the second. Even if the behaviour of clock() probably won't change, its resolution is documented via CLOCKS_PER_SEC, so it should be used.

  • Michael P (unregistered)

    I agree with TopTension. clock() is a standard C function, and CLOCKS_PER_SEC is not even guaranteed to be constant across processes on a system (by C; some standards constrain it, like POSIX defining it to always be 1000000).

  • (nodebb)

    Of course nothing stops you doing something like this:

    #if CLOCKS_PER_SEC == 1000
       // code without the divide and multiply
    #else
       // code with the divide and multiply
    #endif
    

    Addendum 2023-03-22 07:25: Well, unless it's a per-process variable, in which case the #if won't compile...

  • Prime Mover (unregistered)

    Define

    MILLISECONDS_PER_SECOND = 1000
    and you're golden.

  • Jajcus (unregistered)

    The function is used as documented. The 1000 / 1000 part will be optimized out by the compiler. No WTF here.

WTF would be if they didn't use the CLOCKS_PER_SEC constant and then wondered why it fails to work on a different platform.

  • (nodebb) in reply to TopTension

    Indeed; it could use a high performance counter, or be based on CPU clock cycles or other weird internal oscillators. That's, by the way, why it was defined back in the day in the first place: there were dozens of different implementations of clock() depending on the hardware the program was running on :-)

  • (nodebb)

    Looking at "man 3 clock" (https://man7.org/linux/man-pages/man3/clock.3.html), the original author does exactly what the user manual tells him to do.

    Most likely clock() does not return milliseconds; it returns the number of OS ticks that have passed since boot. MS Windows happens to have a tick length of 1 ms, which is fortunate. Other OSes (or even Windows on other platforms) are not guaranteed to use a 1 ms tick.

  • (nodebb)

    Instead of dividing by CLOCKS_PER_SEC and multiplying by 1000 (MS_PER_SEC), they should define and use a constant CLOCKS_PER_MS, set at compile time to CLOCKS_PER_SEC / 1000, which in this case equals 1.

  • (nodebb) in reply to Michael P

    I agree with lots of other people. This is how clock() is documented on my (macOS) system:

    The clock() function determines the amount of processor time used since the invocation of the calling process, measured in CLOCKS_PER_SECs of a second.

    I'd write the line of code in exactly the same way, except I'd define a constant to eliminate the hard coded 1000 as suggested by prueg.

  • (nodebb) in reply to Steve_The_Cynic

    unless it's a per-process variable

    Emphasis there on "variable". The C11 standard defines it thus:

    expands to an expression with type clock_t (described below) that is the number per second of the value returned by the clock function.

    It doesn't even say "constant expression". By the C standard, there's no guarantee that two usages of it will have the same value. There's nothing (in the C standard) to say a C library can't implement a function that allows you to change CLOCKS_PER_SEC.

  • (nodebb)

    Made something similar myself once. Only it wasn't about the system clock, it was about the sampling rate of some data. That piece was vital, because some processing functions worked with sampling points, while others only knew seconds or milliseconds. These were all converted automatically when working through the GUI, but coding it in a macro was a pain in the ass.

  • Barry Margolin (github) in reply to prueg

    CLOCKS_PER_SEC / 1000 could run into round-off error. It's better to keep the division and multiplication separate.

  • ZZartin (unregistered)

    I'm willing to bet that this is one of the mildest things that would break at this point if they ever tried to update to a newer version with different behavior for some of those constants/functions.

  • (nodebb) in reply to Barry Margolin

    It's not a big problem when you make it a constant with a double value.

  • FTB (unregistered) in reply to Jeremy Pereira

    TRWTF is all the people proposing "#define MS_PER_SEC 1000"

    Will the second ever have a different number of milliseconds in it?

    Defining it as a symbol just gives your brain something else to read and parse whereas 1000 is easily understood as "milli".

  • TheCPUWizard (unregistered)

    I remember when clock() was tied to the LTC [AC mains frequency] on certain systems... So CLOCKS_PER_SEC was 50 or 60 depending on the compilation locale [ahh, fun days of early-1980s embedded systems]

  • (nodebb)

    And of course all of us (including me) proposing to remove the divide-and-multiply if CLOCKS_PER_SEC is exactly 1000 are wrong, because it changes the semantics of the expression by eliminating the round-down aspect.

  • Foo AKA Fooo (unregistered) in reply to dkf

    It is a problem if you just declare the constant as double, since the type conversion happens after the integer division. You'd need CLOCKS_PER_SEC / 1000.0 or an explicit cast before the division. And, as has been mentioned, it's not necessarily a constant. You could write a function returning this value. Yeah, that's so much better than the original code.

  • Foo AKA Fooo (unregistered) in reply to Steve_The_Cynic

    Nope, there's a cast to double before the division.

    Wow, so that's 3 Non-WTFs in the posted code, compared to the article and comments. Must be a new record!

  • (nodebb)

    Yup, not a WTF, that's how CLOCKS_PER_SEC is supposed to be used.

  • Craig (unregistered)

    Re the easy-reader note, I do make a practice of including units in the variable name when the value is morally dimensioned but is a floating-point var for whatever reason. It happens semi-frequently in engineering software.

Leave a comment on “All in the Timing”
