Juan M inherited some broken code. Upon investigation, the breakage turned out to be caused by a mix of assumptions.

The first set of assumptions was about how their users would interact with their scheduling system. One part was that users wouldn't try to schedule any events outside the scope of a few human lifetimes. The other part was that their serialization framework would have a consistent representation of datetimes: reliably the number of seconds past the Unix epoch.

Their serialization framework had another assumption: that any sufficiently large timestamps must be in milliseconds.

```python
# if greater than this, the number is in ms, if less than or equal it's in seconds
# (in seconds this is 11th October 2603, in ms it's 20th August 1970)
MS_WATERSHED = int(2e10)

def from_unix_seconds(seconds: Union[int, float]) -> datetime:
    if seconds > MAX_NUMBER:
        return datetime.max
    elif seconds < -MAX_NUMBER:
        return datetime.min
    while abs(seconds) > MS_WATERSHED:
        seconds /= 1000
    dt = EPOCH + timedelta(seconds=seconds)
    return dt.replace(tzinfo=timezone.utc)
```

This method is called from_unix_seconds, but if the number is larger than MS_WATERSHED, it actually treats the input as milliseconds. That was almost certainly a hack, merging two different behaviors into one method, because when you're round-tripping through JSON you just have a unitless number and have to guess at what it represents.
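To make the guess concrete, here's a minimal, self-contained sketch of the same watershed logic. The original snippet doesn't show EPOCH or MAX_NUMBER, so the values below are assumptions chosen to make it run:

```python
from datetime import datetime, timedelta, timezone
from typing import Union

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)  # assumed: the Unix epoch, UTC
MAX_NUMBER = int(3e20)  # assumed overflow sentinel; not shown in the original snippet
MS_WATERSHED = int(2e10)

def from_unix_seconds(seconds: Union[int, float]) -> datetime:
    if seconds > MAX_NUMBER:
        return datetime.max
    elif seconds < -MAX_NUMBER:
        return datetime.min
    while abs(seconds) > MS_WATERSHED:
        seconds /= 1000
    dt = EPOCH + timedelta(seconds=seconds)
    return dt.replace(tzinfo=timezone.utc)

# The guess makes millisecond round-trips "work": the same instant in
# seconds and in milliseconds parses to the same datetime.
same_instant = from_unix_seconds(1_600_000_000) == from_unix_seconds(1_600_000_000_000)

# ...but a legitimate seconds timestamp past October 2603 crosses the
# watershed, silently gets divided by 1000, and lands back in 1970:
year_2700 = from_unix_seconds(23_046_850_800)  # roughly the year 2700, in seconds

print(same_instant, year_2700.year)  # → True 1970
```

The while loop is the tell: it will keep dividing by 1000 until the number looks plausible, so there is no input the function will ever refuse to interpret.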

This code itself isn't something I'd call a WTF, but its existence is. Today's most popular serialization format, JSON, not only lacks a canonical date-time representation, it also offers no way to extend the format with new datatypes. I know we all hate XML for its complexity, but at least it let me specify in the data format itself whether a number represented milliseconds or seconds, instead of leaving my deserializer guessing when it parses the file.
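Short of switching formats, the usual mitigation (a sketch of common practice, not anything from Juan's codebase) is to stop sending unitless numbers entirely and serialize datetimes as ISO 8601 strings, which carry their own meaning:

```python
import json
from datetime import datetime, timezone

# Serialize: the string encodes the date, time, and UTC offset explicitly.
event = {"starts_at": datetime(2603, 10, 11, tzinfo=timezone.utc).isoformat()}
payload = json.dumps(event)  # {"starts_at": "2603-10-11T00:00:00+00:00"}

# Deserialize: no watershed, no guessing about seconds vs. milliseconds,
# and dates past 2603 survive the round trip intact.
parsed = datetime.fromisoformat(json.loads(payload)["starts_at"])
print(parsed.year)  # → 2603
```

The trade-off is a slightly larger payload and a string parse instead of an integer read, which is cheap insurance against a deserializer that rewrites your timestamps by a factor of a thousand.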