• (cs) in reply to Bustaz Kool

    I know! I know! I know!
    Ooooh, I have the answer!

    He should AVOID dividing. Dividing is just inverse multiplying!

    milliseconds * 0,0000166666666666667 = Minutes

    No negative/positive checks required, and no chance of encountering "DivideByZeroExecption"!

    [:D][H]
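    A minimal Java sketch of the point being made (the method names are made up for this illustration): in Java, both plain integer division and a double-to-int cast truncate toward zero, so negative inputs need no special handling either way.

        static int minutesByDivision(long milliseconds) {
            // Integer division in Java truncates toward zero: -90000 / 60000 == -1
            return (int) (milliseconds / 60000);
        }

        static int minutesByMultiplication(long milliseconds) {
            // Casting a double to int also truncates toward zero: (int) -1.5 == -1
            return (int) (milliseconds * (1.0 / 60000.0));
        }

        // Both calls print -1 for an input of -90000:
        // System.out.println(minutesByDivision(-90000));
        // System.out.println(minutesByMultiplication(-90000));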

  • (cs) in reply to RobIII
    RobIII:

    I know! I know! I know!
    Ooooh, I have the answer!

    He should AVOID dividing. Dividing is just inverse multiplying!

    milliseconds * 0,0000166666666666667 = Minutes

    No negative/positive checks required, and no chance of encountering "DivideByZeroExecption"!

    [:D][H]


    That might work in your locale, but I think your octal constant has too many digits.  <BEG>

    Sincerely,

    Gene Wirchenko

  • (cs) in reply to ABAP

    Anonymous:
    > Except integer division doesn't round, it truncates, and will therefore always go towards zero anyways.

    Depends on the "programming language"...

    No, it depends on the hardware. All of which I've seen so far that have a divide instruction will truncate.

  • (cs) in reply to Lews

    Lews:
    Anonymous:
    Assuming there are more than ten additional bits in a long data type compared to an int data type, there's also an overflow here.
    On x86 (I doubt the coder was using something else), longs and ints are both 4 bytes long.

    According to the official C spec, ints are supposed to be 2 bytes (16 bits), longs are 4 (32), and long long are 8 (64)...
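    A quick Java sketch of the overflow in question (the 30-day span is an arbitrary example): Integer.MAX_VALUE milliseconds is only about 24.8 days, so a millisecond difference stored in an int wraps around for anything longer than that.

        long thirtyDaysInMillis = 30L * 24 * 60 * 60 * 1000;  // 2,592,000,000 ms
        int wrapped = (int) thirtyDaysInMillis;               // overflows to -1,702,967,296
        System.out.println(wrapped < 0);                      // prints "true"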

  • zorro (unregistered) in reply to llxx

    How about this to convert millis in minutes:

    int millis2minutes(int millis) {
        int min = 0;
        int cnt = 0;
        int factor = 1;
        // handle special 0 case
        if (millis == 0)
            return min;
        if (millis < 0) {
            factor *= -1;
        }
        millis *= factor;
        while (millis-- > 0) {
            cnt += 1;
            if (cnt == 60000) {
                min += 1;
                cnt = 0;
            }
        }
        return factor * min;
    }

    Just trying wtf-ing ...

    Also, that 0 comment is hilarious, since the guy's protecting himself from a multiplication by 0! .. 0 times (1/60000).

  • (cs) in reply to llxx
    llxx:

    Lews:
    Anonymous:
    Assuming there are more than ten additional bits in a long data type compared to an int data type, there's also an overflow here.
    On x86 (I doubt the coder was using something else), longs and ints are both 4 bytes long.

    According to the official C spec, ints are supposed to be 2 bytes (16 bits), longs are 4 (32), and long long are 8 (64)...



    Ints are 2 bytes on some hardware, 4 bytes on others. For example, on 32-bit x86 they are 4 bytes long.

    If you need to make sure that your ints are the right size, use long (4 bytes) or short (2 bytes)
  • Travis (unregistered) in reply to Lews

    Except this is Java (Note the 'boolean'), on which that means nothing.
    int is 4, long is 8, short is 2.

  • (cs) in reply to llxx
    llxx:

    Anonymous:
    > Except integer division doesn't round, it truncates, and will therefore always go towards zero anyways.

    Depends on the "programming language"...

    No, it depends on the hardware. All of which I've seen so far that have a divide instruction will truncate.

    And how exactly would this depend on hardware? If I wrote a scripting language that translated user input: 100 \ 3 to something like Math.ceil(100.0 / 3.0) then it would not depend on the hardware...

    Drak [pi][pi][pi]
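    For Java specifically, the result is pinned down by the language rather than by the hardware: integer division rounds toward zero. A small sketch (Math.floorDiv is only available in newer Java versions):

        System.out.println(-7 / 2);                 // -3  (truncates toward zero)
        System.out.println(Math.floorDiv(-7, 2));   // -4  (rounds toward negative infinity)
        System.out.println(-7 % 2);                 // -1  (remainder keeps the dividend's sign)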

  • vhawk (unregistered)

    I bow to the work of this 'artist'.  Even if I tried I could not come up with this. Skills like these are hard to find ....

  • -L (unregistered) in reply to Lews
    llxx:


    Ints are 2 bytes on some hardware, 4 bytes on others. For example, on 32-bit x86 they are 4 bytes long.

    If you need to make sure that your ints are the right size, use long (4 bytes) or short (2 bytes)

    No.

    For "portable" programs using current C compilers, use int8_t, int16_t, int32_t, int64_t (or the uintX_t types). Even though the standard says in 7.18.1 that these are optional (a wtf in itself). If your compiler for some platform does not support these types, generate your own portable defines.
  • Szeryf (unregistered) in reply to Martin Carolan

    but what happens if you divide 0 by DivisionByZeroException?????

  • Kevin (unregistered) in reply to Szeryf

    I don't know,  but this WTF seems really familiar to me, as if I've seen it before...
    Deja-vu anyone? Or am I just paranoid?

  • Kevin (unregistered) in reply to Kevin

    Yeah, I've found it! jesperdj posted it on the Sun Java Forums I visit regularly!

  • (cs) in reply to Mung Kee
    Mung Kee:

    I see.  I happened upon the scene when frailty of compilers wasn't such an issue.  Unfortunately, frailty of knowledge on the other hand, which plagues all who try a new language, will be with us forever.  In the end, it's the same struggle.


    I'd say that a compiler that barfs on some errors without giving you any hint what's wrong makes it rather difficult to attain non-frail knowledge in the language.... never change more than a few lines of a program before recompiling, otherwise you're in for a long trial-and-error hunt for the change that caused the problem.... Ugh, I don't want to imagine. Give me eclipse's compile-as-you-type error highlighting or give me death.
  • A Name (unregistered) in reply to -L
    For "portable" programs using current C compilers, use int8_t, int16_t, int32_t, int64_t (or the uintX_t types). Even though the standard says in 7.18.1 that these are optional (a wtf in itself). If your compiler for some platform does not support these types, generate your own portable defines.

    That's not a wtf. It doesn't make much sense to define them on platforms that can't support them.

  • Bronek Kozicki (unregistered) in reply to Dave Markle

    The same bug is in the math that I learned at school and all sane programming languages that I know; shame on me for attending poor school and learning wrong languages.

  • ABAP (unregistered) in reply to llxx
    llxx:

    Anonymous:
    > Except integer division doesn't round, it truncates, and will therefore always go towards zero anyways.

    Depends on the "programming language"...

    No, it depends on the hardware. All of which I've seen so far that have a divide instruction will truncate.

    No, ABAP converts 2.3 to 2, -2.3 to -2, but 2.7 to 3 and -2.7 to -3; see

    http://help.sap.com/saphelp_erp2005/helpdata/en/fc/eb32e2358411d1829f0000e829fbfe/frameset.htm
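    For comparison, a Java sketch of the difference being described (behaviour at exact halfway values such as -2.5 may differ from ABAP's, so those are left out here):

        System.out.println((int) 2.7);           // 2   (cast truncates toward zero)
        System.out.println((int) -2.7);          // -2
        System.out.println(Math.round(2.7));     // 3   (rounds to the nearest whole number)
        System.out.println(Math.round(-2.7));    // -3
        System.out.println(Math.round(-2.3));    // -2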

  • (cs) in reply to llxx
    llxx:

    According to the official C spec, ints are supposed to be 2 bytes (16 bits), longs are 4 (32), and long long are 8 (64)...


    Bullshit. According to the Holy Standard, the only thing you can be sure of is that
    sizeof(long long) >= sizeof(long) >= sizeof(int) >= sizeof(char).

    So it is perfectly legal to make a compiler where all integral types are 2 bytes, or even 17 bytes.
  • ABAP (unregistered) in reply to ABAP
  • zootm (unregistered) in reply to kipthegreat
    kipthegreat:
    Not always true that 0/x = 0:  x could be 0 as well.  And if these were float/double values, x could also be NaN, +inf, -inf, or -0.0.

    Of course, this doesn't change the fact that there is nothing exceptional about 0 in the WTF code.

    All of those yield an undefined result, however, so division by zero is as good an error as any.
  • (cs) in reply to bullestock
    bullestock:

    Bullshit. According to the Holy Standard, the only thing you can be sure of is that
    sizeof(long long) >= sizeof(long) >= sizeof(int) >= sizeof(char).

    So it is perfectly legal to make a compiler where all integral types are 2 bytes, or even 17 bytes.


    Bullshit. Citing http://www.open-std.org/jtc1/sc22/wg14/www/docs/n843.htm:
    6.5.3.4  The sizeof operator
    ...
    [#3] When applied to an operand that has type char, unsigned
    char, or signed char, (or a qualified version thereof) the
    result is 1.

  • Nick (unregistered) in reply to brazzy
    brazzy:
    bullestock:

    Bullshit. According to the Holy Standard, the only thing you can be sure of is that
    sizeof(long long) >= sizeof(long) >= sizeof(int) >= sizeof(char).

    So it is perfectly legal to make a compiler where all integral types are 2 bytes, or even 17 bytes.


    Bullshit. Citing http://www.open-std.org/jtc1/sc22/wg14/www/docs/n843.htm:
    6.5.3.4  The sizeof operator
    ...
    [#3] When applied to an operand that has type char, unsigned
    char, or signed char, (or a qualified version thereof) the
    result is 1.



    Where does it say that those are bytes? char just has some unit length.

    (Replying to brazzy in case the quoting is buggered up)
  • (cs) in reply to Gene Wirchenko
    Gene Wirchenko:
    RobIII:

    I know! I know! I know!
    Ooooh, I have the answer!

    He should AVOID dividing. Dividing is just inverse multiplying!

    milliseconds * 0,0000166666666666667 = Minutes

    No negative/positive checks required, and no chance of encountering "DivideByZeroExecption"!

    [:D][H]


    That might work in your locale, but I think your octal constant has too many digits.  <BEG>

    Sincerely,

    Gene Wirchenko

    It took me a while to understand what you were saying. Finally I saw it. Excuse me for this type [:D]

    Of course I meant:

    milliseconds * 0.0000166666666666667 = Minutes

  • (cs) in reply to RobIII
    RobIII:
    Gene Wirchenko:
    RobIII:

    I know! I know! I know!
    Ooooh, I have the answer!

    He should AVOID dividing. Dividing is just inverse multiplying!

    milliseconds * 0,0000166666666666667 = Minutes

    No negative/positive checks required, and no chance of encountering "DivideByZeroExecption"!

    [:D][H]


    That might work in your locale, but I think your octal constant has too many digits.  <BEG>

    Sincerely,

    Gene Wirchenko

    It took me a while to understand what you were saying. Finally I saw it. Excuse me for this type [:D]

    Of course I meant:

    milliseconds * 0.0000166666666666667 = Minutes

    AARGH! I WANT EDIT FUNCTIONALITY!!! [:@]

    "Type" is supposed to be "Typo"...

  • (cs) in reply to llxx
    llxx:

    According to the official C spec, ints are supposed to be 2 bytes (16 bits), longs are 4 (32), and long long are 8 (64)...

    That could hardly be more wrong

    bullestock:
    llxx:

    According to the official C spec, ints are supposed to be 2 bytes (16 bits), longs are 4 (32), and long long are 8 (64)...


    Bullshit. According to the Holy Standard, the only thing you can be sure of is that
    sizeof(long long) >= sizeof(long) >= sizeof(int) >= sizeof(char).

    So it is perfectly legal to make a compiler where all integral types are 2 bytes, or even 17 bytes.

    Indeed, and sizeof(char) is defined as 1 by the Holy Standard, if I remember correctly.

    brazzy:
    bullestock:

    Bullshit. According to the Holy Standard, the only thing you can be sure of is that
    sizeof(long long) >= sizeof(long) >= sizeof(int) >= sizeof(char).

    So it is perfectly legal to make a compiler where all integral types are 2 bytes, or even 17 bytes.


    Bullshit. Citing http://www.open-std.org/jtc1/sc22/wg14/www/docs/n843.htm:
    6.5.3.4  The sizeof operator
    ...
    [#3] When applied to an operand that has type char, unsigned
    char, or signed char, (or a qualified version thereof) the
    result is 1.

    "1" is the length of a char, a char is 1 char long. Wether said char is 1 byte or 4 doesn't matter.

    Char is the smallest data length of C as per the Holy Standard and the base for everything else (which is why sizeof returns a result in "size_t", which is the length of a char)

  • (cs) in reply to Nick
    Anonymous:

    Where does it say that those are bytes? char just has some unit length.


    If you'd bothered to look at the standard, you'd have seen:

    Semantics

    [#2] The sizeof operator yields the size (in bytes) of its operand,

    As someone has pointed out here a while ago, the only way to get char to be anything other than an 8-bit byte in C is by defining "byte" to have a different number of bits - the standard explicitly allows this.

    The standard really shows C's roots in the iron age of computing - it strongly equates bytes with characters. Theoretically, a system that is really Unicode-based would have to define a byte as 32 bits, because that's how much you need to "hold any member of the basic character set of the execution environment", which is what the standard requires a byte to be able to hold.
  • (cs) in reply to brazzy
    brazzy:
    bullestock:

    Bullshit. According to the Holy Standard, the only thing you can be sure of is that
    sizeof(long long) >= sizeof(long) >= sizeof(int) >= sizeof(char).

    So it is perfectly legal to make a compiler where all integral types are 2 bytes, or even 17 bytes.


    Bullshit. Citing http://www.open-std.org/jtc1/sc22/wg14/www/docs/n843.htm:
    6.5.3.4  The sizeof operator
    ...
    [#3] When applied to an operand that has type char, unsigned
    char, or signed char, (or a qualified version thereof) the
    result is 1.

    Sorry, of course I meant "all integral types *except* char".
  • (cs) in reply to tedbilly

    Why are 90% of the comments here written by and/or directed at third graders? I see ample opportunity for a sociology thesis. Anyone?

  • (cs)
    Alex Papadimoulis:
      if (milliDiff < 0)
      {
        negative = true;
        milliDiff = -milliDiff; // Make positive (is easier)
      }
    


    Shouldn't that be:

    negative = -true;

    ?
  • csrster (unregistered) in reply to nickelarse

    Nobody has pointed out the real WTFs which are those horrible magic
    numbers 60 and 1000. Basic coding practice says these should be:

    /**
    * A thousand
    */
    private static final int ONE_THOUSAND = 1000;

    /**
    * Sixty
    */
    private static final int SIXTY = 60;

  • (cs)

    This is the best comment ever:

    // Watch out for exceptional value 0	

  • Anonymouse (unregistered) in reply to Jon
    Anonymous:
    Anonymous:
    I like how the comments on this site are usually bigger WTFs than the posts themselves.


    Me too. There seems to be a prevalence of dumb people with no sense of humour who read thedailywtf.com. My request to these dumb people is: please keep posting, I enjoy laughing at you.

    And the best thing is that the dumb people I'm talking about don't even know I'm talking about them.


    Schizophrenia is a wonderful thing...
  • (cs) in reply to csrster
    Anonymous:
    Nobody has pointed out the real WTFs which are those horrible magic
    numbers 60 and 1000. Basic coding practice says these should be:

    /**
    * A thousand
    */
    private static final int ONE_THOUSAND = 1000;

    /**
    * Sixty
    */
    private static final int SIXTY = 60;


    oooh close but no cigar

    private static final int MILLISECONDS_PER_SECOND = 1000;

    private static final int SECONDS_PER_MINUTE = 60;


    Or, if you're a proponent of Semantic/Verbose Programming:

    private static final int MILLISECONDS_PER_SECOND_DIVIDER = 1000;

    private static final int SECONDS_PER_MINUTE_DIVIDER = 60;



    Rendering the final code as:

    minutesDiff  = (int) (milliDiff / MILLISECONDS_PER_SECOND_DIVIDER) / SECONDS_PER_MINUTE_DIVIDER;


    There's a fine example of a Good Practice.
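    Put together as a complete method (the method name and the long parameter type are illustrative choices, not from the original code), the constant-based version would look something like this:

        private static final int MILLISECONDS_PER_SECOND = 1000;
        private static final int SECONDS_PER_MINUTE = 60;

        static int millisToMinutes(long milliDiff) {
            // Integer division truncates toward zero, so no sign check is needed
            return (int) (milliDiff / MILLISECONDS_PER_SECOND / SECONDS_PER_MINUTE);
        }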
  • Kevin (unregistered) in reply to dhromed
    dhromed:
    Anonymous:
    Nobody has pointed out the real WTFs which are those horrible magic
    numbers 60 and 1000. Basic coding practice says these should be:

    /**
    * A thousand
    */
    private static final int ONE_THOUSAND = 1000;

    /**
    * Sixty
    */
    private static final int SIXTY = 60;


    oooh close but no cigar

    private static final int MILLISECONDS_PER_SECOND = 1000;

    private static final int SECONDS_PER_MINUTE = 60;


    Or, if you're a proponent of Semantic/Verbose Programming:

    private static final int MILLISECONDS_PER_SECOND_DIVIDER = 1000;

    private static final int SECONDS_PER_MINUTE_DIVIDER = 60;



    Rendering the final code as:

    minutesDiff  = (int) (milliDiff / MILLISECONDS_PER_SECOND_DIVIDER) / SECONDS_PER_MINUTE_DIVIDER;


    There's a fine example of a Good Practice.


    Yeah, thats better than defining


  • Kevin (unregistered) in reply to Kevin
    Anonymous:
    dhromed:
    Anonymous:
    Nobody has pointed out the real WTFs which are those horrible magic
    numbers 60 and 1000. Basic coding practice says these should be:

    /**
    * A thousand
    */
    private static final int ONE_THOUSAND = 1000;

    /**
    * Sixty
    */
    private static final int SIXTY = 60;


    oooh close but no cigar

    private static final int MILLISECONDS_PER_SECOND = 1000;

    private static final int SECONDS_PER_MINUTE = 60;


    Or, if you're a proponent of Semantic/Verbose Programming:

    private static final int MILLISECONDS_PER_SECOND_DIVIDER = 1000;

    private static final int SECONDS_PER_MINUTE_DIVIDER = 60;



    Rendering the final code as:

    minutesDiff  = (int) (milliDiff / MILLISECONDS_PER_SECOND_DIVIDER) / SECONDS_PER_MINUTE_DIVIDER;


    There's a fine example of a Good Practice.


    Yeah, that's better than defining


    Feck, my code tags got buggered up. What i wanted to say was:

    That's better than defining

    public static final int TWO_HUNDRED_THOUSAND_SIX_THOUSAND_FOUR_HUNDRED_AND_FOURTY_THREE = 206443;

    I rest my case.
  • (cs) in reply to brazzy
    brazzy:
    Mung Kee:

    I see.  I happened upon the scene when frailty of compilers wasn't such an issue.  Unfortunately, frailty of knowledge on the other hand, which plagues all who try a new language, will be with us forever.  In the end, it's the same struggle.


    I'd say that a compiler that barfs on some errors without giving you any hint what's wrong makes it rather difficult to attain non-frail knowledge in the language.... never change more than a few lines of a program before recompiling, otherwise you're in for a long trial-and-error hunt for the change that caused the problem.... Ugh, I don't want to imagine. Give me eclipse's compile-as-you-type error highlighting or give me death.


    Agreed.  I wonder if anyone here uses it for C# development.  If so, is it any good?  Most Java developers I know swear by it, but I'm not sure if the number of plug-ins makes it worthwhile for C#.
  • (cs) in reply to brazzy
    brazzy:
    Anonymous:

    Where does it say that those are bytes? char just has some unit length.


    If you'd bothered to look at the standard, you'd have seen:

    Semantics

    [#2] The sizeof operator yields the size (in bytes) of its operand,

    As someone has pointed out here a while ago, the only way to get char to be anything other than an 8-bit byte in C is by defining "byte" to have a different number of bits - the standard explicitly allows this.

    The standard really shows C's roots in the iron age of computing - it strongly equates bytes with characters. Theoretically, a system that is really Unicode-based would have to define a byte as 32 bits, because that's how much you need to "hold any member of the basic character set of the execution environment", which is what the standard requires a byte to be able to hold.


    The original K&R book (not the ANSI edition) lists one platform with a C compiler where char, int, short, and long all have exactly 36 bits, which you will notice is not a multiple of 8.

    There were also machines with 12-bit bytes (pdp8?), where hex notation is not so nice, but octal works great.   I wonder what Purdue graduates would think of such a concept.

    These days nobody makes such machines, but back in the big iron age a byte was not always 8 bits.
  • (cs) in reply to kipthegreat

    Did you ever consider that he may not be trying to make it easier for the machine, but instead this conversion makes it easier for him? Perhaps he just cannot visualize how negative division works. : )

  • (cs) in reply to Mung Kee
    Mung Kee:
    brazzy:
    Mung Kee:

    I see.  I happened upon the scene when frailty of compilers wasn't such an issue.  Unfortunately, frailty of knowledge on the other hand, which plagues all who try a new language, will be with us forever.  In the end, it's the same struggle.


    I'd say that a compiler that barfs on some errors without giving you any hint what's wrong makes it rather difficult to attain non-frail knowledge in the language.... never change more than a few lines of a program before recompiling, otherwise you're in for a long trial-and-error hunt for the change that caused the problem.... Ugh, I don't want to imagine. Give me eclipse's compile-as-you-type error highlighting or give me death.


    Agreed.  I wonder if anyone here uses it for C# development.  If so, is it any good?  Most Java developers I know swear by it, but I'm not sure if the number of plug-ins makes it worthwhile for C#.


    I haven't used Eclipse since college, but we can get background compilation in VS.NET using Resharper.  I'm told that a lot of the functionality of resharper is also in Eclipse, but I can't be a good judge of that since I can hardly remember what Eclipse had 3 years ago.

    Amazingly, the bastard child known as VB.NET has background compilation out of the box in VS.NET 2003.  I think that VS2005 is supposed to have it for both languages, but I'm not sure of that either.
  • (cs) in reply to hank miller
    hank miller:

    The original K&R book (not the ANSI edition) lists one platform with a C compiler where char, int, short, and long all have exactly 36 bits, which you will notice is not a multiple of 8.

    There were also machines with 12-bit bytes (pdp8?), where hex notation is not so nice, but octal works great.   I wonder what Purdue graduates would think of such a concept.

    These days nobody makes such machines, but back in the big iron age a byte was not always 8 bits.


    Bytes having more (or less) than 8 bits is fine - I can deal with that (at least theoretically), it's just an arbitrary grouping.

    What irks me is the assumption that 1 byte = one character from a fixed charset. But it must have seemed natural at a time when all your hardware and software came from one vendor, when you were an "IBM shop" meaning that Token Ring was THE network, EBCDIC was THE text encoding, and wanting your machine to talk to a machine that was from DEC and used Ethernet and ASCII was just pure madness.

    This mindset pretty much explains the big steaming pile of WTF that is the multitude of text encodings we have to deal with today. I shudder at the thought of what it must have been like in 1970 to be a computer engineer, e.g. from Japan, trying to explain that no, your language's character set does NOT just require a few characters with funny dots and dashes to be added, and does in fact NOT fit into 128 or even 256 slots.

    Well, now at least I know whom to blame for the difficulty in teaching aspiring Java programmers the difference between String and byte[] and why the former should never be used to hold the latter, and the reverse only with an explicitly specified character encoding.
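    A small Java sketch of that last point (UTF-8 is just the example encoding here, and StandardCharsets needs a newer JDK than the one under discussion):

        import java.nio.charset.StandardCharsets;

        public class EncodingDemo {
            public static void main(String[] args) {
                // text -> bytes with an explicit encoding, and back again with the same one
                byte[] encoded = "Grüße".getBytes(StandardCharsets.UTF_8);
                String decoded = new String(encoded, StandardCharsets.UTF_8);
                System.out.println(decoded);         // prints Grüße
                System.out.println(encoded.length);  // 7: ü and ß each take two bytes in UTF-8
            }
        }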

  • anonymous (unregistered) in reply to Gene Wirchenko

    No! it should be
    int convert_milliseconds_to_minutes(long int t) {
            long int t5 = t>>5;
            return ((t5<<1)+(t5<<2)+(t5<<3)+(t5<<6)+(t5<<7)+(t5<<8)+(t5<<9)
            +(t5<<11)+(t5<<15))>>26;
    }
    (Yes, I know this will not work).

  • (cs) in reply to llxx
    llxx:

    Lews:
    Anonymous:
    Assuming there are more than ten additional bits in a long data type compared to an int data type, there's also an overflow here.
    On x86 (I doubt the coder was using something else), longs and ints are both 4 bytes long.

    According to the official C spec, ints are supposed to be 2 bytes (16 bits), longs are 4 (32), and long long are 8 (64)...


    Best comment in a while.

    Concerning compiler barfing: Anyone who used old versions of gcc or Turbo anything knows all about compilers crashing over perfectly legal code, let alone bad code. Template instantiation in particular has been crash prone in every C++ compiler at some point.
  • anonymous (unregistered) in reply to anonymous

    I meant:

    Anonymous:
    No! it should be
    int convert_milliseconds_to_minutes(long int t) {
         long int t5 = t>>5;
         return ((t5<<1)+(t5<<2)+(t5<<3)+(t5<<6)+(t5<<7)+(t5<<8)+(t5<<9)
                     +(t5<<11)+(t5<<15))>>26;
    }
    (Yes, I know this will not work).
  • (cs) in reply to brazzy

    brazzy:
    Bytes having more (or less) than 8 bits is fine - I can deal with that (at least theoretically), it's just an arbitrary grouping.

    What irks me is the assumption that 1 byte = one character from a fixed charset. But it must have seemed natural at a time when all your hardware and software came from one vendor, when you were an "IBM shop" meaning that Token Ring was THE network, EBCDIC was THE text encoding, and wanting your machine to talk to a machine that was from DEC and used Ethernet and ASCII was just pure madness.

    It isn't assumed (at least it's not in C++, I assume the same holds true for C). See 26.3 in http://www.parashift.com/c++-faq-lite/intrinsic-types.html

    I highly recommend the book the website forms a subset of for anyone doing C++ work.

  • (cs) in reply to brazzy
    brazzy:

    What irks me is the assumption that 1 byte = one character from a fixed charset. But it must have seemed natural at a time when all your hardware and software came from one vendor, when you were an "IBM shop" meaning that Token Ring was THE network, EBCDIC was THE text encoding, and wanting your machine to talk to a machine that was from DEC and used Ethernet and ASCII was just pure madness.

    This mindset pretty much explains the big steaming pile of WTF that is the multitude of text encodings we have to deal with today. I shudder at the thought of what it must have been like in 1970 to be a computer engineer, e.g. from Japan, trying to explain that no, your language's character set does NOT just require a few characters with funny dots and dashes to be added, and does in fact NOT fit into 128 or even 256 slots.

    Well, now at least I know whom to blame for the difficulty in teaching aspiring Java programmers the difference between String and byte[] and why the former should never be used to hold the latter, and the reverse only with an explicitly specified character encoding.


    That is like saying, "It irks me that Columbus sailed across the ocean.  It is a WTF that we have all these harbors when we really need airports that handle jumbo jets."

    Put it in historical context.  When a few K of RAM cost thousands of dollars and disk drives storing 10 MB were the size of washing machines, these kinds of optimizations were needed.  Every bit was precious.

    Obviously, we wouldn't design a system like this today when a GB of ram can be had for $50 and a  TB of hard drive storage is available for the PC market for a few hundred dollars.  But we are stuck with a legacy.
  • nobody (unregistered) in reply to RobIII
    RobIII:
    I know! I know! I know! Ooooh, I have the answer!

    He should AVOID dividing. Dividing is just inverse multiplying!

    milliseconds * 0,0000166666666666667 = Minutes

    No negative/positive checks required, and no chance of encountering "DivideByZeroExecption"!

    No! That code breaks the principle of readability in code. It has too many instances of the number "6".

  • (cs) in reply to g__
    g__:
    brazzy:

    What irks me is the assumption that 1 byte = one character from a fixed charset.

    It isn't assumed (at least it's not in C++, I assume the same holds true for C). See 26.3 in http://www.parashift.com/c++-faq-lite/intrinsic-types.html

    I highly recommend the book the website forms a subset of for anyone doing C++ work.


    And I highly recommend referring to the language spec rather than someone's possibly very useful but certainly not official FAQ. As I cited before, the C standard (of which the C++ standard is merely an extension) defines a byte as "addressable unit of data storage large enough to hold any member of the basic character set of the execution environment". Since nowadays 1 byte = 8 bits is the norm dictated by hardware makers, but 8 bits are not enough for the charsets we want to use, disregarding this part of the standard is considered a Best Practice, which is basically a language WTF.

  • (cs) in reply to RevMike
    RevMike:

    That is like saying, "It irks me that Columbus sailed across the ocean.  It is a WTF that we have all these harbors when we really need airports that handle jumbo jets."

    Put it in historical context.  When a few K of RAM cost thousands of dollars and disk drives storing 10 MB were the size of washing machines, these kinds of optimizations were needed.  Every bit was precious.

    Obviously, we wouldn't design a system like this today when a GB of ram can be had for $50 and a  TB of hard drive storage is available for the PC market for a few hundred dollars.  But we are stuck with a legacy.


    What I'm criticizing is that the standard blurs the distinction between binary data (bytes) and text data (characters) while permitting the mapping between them to be implicit, platform-dependent and inadequate to represent more than a handful of languages. The first three could have been done better without needing any additional space whatsoever, and the last by using extra space only when necessary.

    Also, note that at the same time, despite the high costs for RAM and HD space, it was common to store and manipulate numerical data as text strings, which wastes space AND processor time.

  • Optitron (unregistered)

    The negative rounding problem people have mentioned is for the common +.5 trick used to round to the nearest when converting from float to int.

    So default:
    2.3 -> 2
    -2.3 -> -2
    2.7 -> 2
    -2.7 -> -2

    And you want
    (start float) -> (int conversion)
    2.3 -> 2
    -2.3 -> -2
    2.7 -> 3
    -2.7 -> -3

    Well, if you add .5 to the float and then convert, for positive numbers this works, so
    (start float) = (after .5 addition) -> (int conversion)
    2.3 = 2.8 -> 2 (correct)
    -2.3 = -1.8 -> -1 (incorrect)
    2.7 = 3.2 -> 3 (correct)
    -2.7 = -2.2 -> -2 (incorrect)

    So, if those numbers were made positive, added .5 to, and then made negative again, then it'd work. Of course a competent programmer would just negate .5 instead:
    (start float) = (after .5 subtraction) -> (int conversion)
    -2.3 = -2.8 -> -2 (correct)
    -2.7 -> -3.2 -> -3 (correct)
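    A compact Java version of that sign-aware variant (the helper name is made up for this sketch):

        static int roundToNearest(double x) {
            // add 0.5 for non-negative values, subtract 0.5 for negative ones, then truncate toward zero
            return (int) (x >= 0 ? x + 0.5 : x - 0.5);
        }

        // roundToNearest(2.3)  == 2    roundToNearest(2.7)  == 3
        // roundToNearest(-2.3) == -2   roundToNearest(-2.7) == -3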

  • (cs) in reply to AndrewVos
    AndrewVos:
    Alex Papadimoulis:

     //Watch out for exceptional value 0
      

    BWAAAAHAHAHAHHAHAHAHAAHHAHA! What a tosser!

    ummm.. I like zero... I think that zero is quite exceptional, in a not really positive and not really negative way.  Sort of like something that is nothing, but still is something even though it is nothing, if you know what I mean.

     
