Why does timestamp() show an extra microsecond compared with subtracting 1970-01-01?

Question:

The following differ by 1 microsecond (with import datetime as dt and from datetime import datetime):

In [37]: datetime(2514, 5, 30, 1, 53, 4, 986754, tzinfo=dt.timezone.utc) - datetime(1970,1,1, tzinfo=dt.timezone.utc)
Out[37]: datetime.timedelta(days=198841, seconds=6784, microseconds=986754)

In [38]: datetime(2514, 5, 30, 1, 53, 4, 986754, tzinfo=dt.timezone.utc).timestamp()
Out[38]: 17179869184.986755

The number of microseconds is 986754 in the first case, and 986755 in the second.

Is this just Python floating point arithmetic error, or is there something else I’m missing?

Asked By: ignoring_gravity


Answers:

It is a floating-point approximation. If you just type

17179869184.986754

into Python, you will get

17179869184.986755

The former is not expressible with the precision available to Python’s float type.
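
A minimal sketch of how to inspect this, using the standard decimal module to show the exact value the float stores:

from decimal import Decimal

# The literal is rounded to the nearest 64-bit double; Decimal(float) exposes that exact value
print(Decimal(17179869184.986754))
# 17179869184.98675537109375

# Both literals round to the same double, so Python considers them equal
print(17179869184.986754 == 17179869184.986755)
# True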

Answered By: khelwood

Given that 17179869184.986754 lies between 2^34 and 2^35, and a double-precision float has a 53-bit significand, you get

(2**35 - 2**34) / 2**53 = 1.9073486328125e-06

i.e. the maximum rounding error at this magnitude is about 1.9 µs, so the precision is worse than one microsecond.
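
A minimal sketch confirming those numbers with math.ulp (available since Python 3.9); the variable names are just for illustration:

import math

ts = 17179869184.986754      # the timestamp from the question, between 2**34 and 2**35
step = math.ulp(ts)          # spacing between adjacent doubles at this magnitude
print(step)                  # 3.814697265625e-06
print(step / 2)              # 1.9073486328125e-06 -> maximum rounding error, ~1.9 µs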

  • this is not specific to Python; it’s a property of the IEEE 754 double-precision floating-point format
  • the representation of Unix time as a floating-point number of seconds since 1970-01-01 is not always worse than microsecond accuracy; it depends on the absolute value of the number (see the sketch below)
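
To illustrate the second point, a small sketch (dates chosen arbitrarily) showing how the resolution of a float timestamp coarsens as its magnitude grows:

import math
from datetime import datetime, timezone

# spacing of adjacent doubles (math.ulp, Python 3.9+) around the float timestamp of a few dates
for d in (datetime(1971, 1, 1), datetime(2023, 1, 1), datetime(2514, 5, 30, 1, 53, 4, 986754)):
    ts = d.replace(tzinfo=timezone.utc).timestamp()
    print(d.year, ts, math.ulp(ts))

# 1971 31536000.0 3.725290298461914e-09          (nanosecond-level resolution)
# 2023 1672531200.0 2.384185791015625e-07        (still well under a microsecond)
# 2514 17179869184.986755 3.814697265625e-06     (coarser than a microsecond)
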
Answered By: FObersteiner

It’s an error of 1.37109375 µs due to conversion to a 64-bit float:

print("%100.100f\n" % (17179869184.986754))
17179869184.9867553710937500000000000000000000000000000000000000000000000000000000000000000000000000000000000000

17179869184.98675537109375   (exact value stored in the 64-bit float)
17179869184.986755           (what repr() / timestamp() displays)
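
The same 1.37109375 µs figure can be reproduced with a minimal sketch, assuming only the standard decimal module:

from decimal import Decimal

# exact value stored in the double, minus the exact decimal value that was written
err = Decimal(17179869184.986754) - Decimal("17179869184.986754")
print(err)
# 0.00000137109375  (seconds), i.e. 1.37109375 µs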

Answered By: Joe