# Why does timestamp() show an extra microsecond compared with subtracting 1970-01-01?

## Question:

The following differ by 1 microsecond:

```python
In [37]: datetime(2514, 5, 30, 1, 53, 4, 986754, tzinfo=dt.timezone.utc) - datetime(1970, 1, 1, tzinfo=dt.timezone.utc)
Out[37]: datetime.timedelta(days=198841, seconds=6784, microseconds=986754)

In [38]: datetime(2514, 5, 30, 1, 53, 4, 986754, tzinfo=dt.timezone.utc).timestamp()
Out[38]: 17179869184.986755
```

The number of microseconds is `986754` in the first case and `986755` in the second.

Is this just Python floating point arithmetic error, or is there something else I’m missing?

## Answer 1:

It is a floating-point approximation. If you just type

```
17179869184.986754
```

into Python, you will get

```
17179869184.986755
```

The former is not exactly representable with the precision available to Python's float type (an IEEE 754 double).
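One quick way to see this (not part of the original answer) is to ask the `decimal` module for the exact binary value the float actually stores:

```python
from decimal import Decimal

# The literal 17179869184.986754 is rounded to the nearest representable
# double; Decimal(float) shows the exact value that is stored.
print(Decimal(17179869184.986754))  # 17179869184.98675537109375
```

Rounded back to six decimal places, that stored value is `17179869184.986755`, which is exactly what `timestamp()` displayed.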

Given that `17179869184.986754` is between `2^34` and `2^35`, and a double-precision float has a 53-bit significand, you get

```
(2**35 - 2**34) / 2**53 = 1.9073486328125e-06
```

i.e. the precision available at this magnitude is well above one microsecond (that is, worse than microsecond resolution).
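You can confirm the spacing directly with `math.ulp` (available since Python 3.9): for values in `[2^34, 2^35)` the gap between adjacent doubles is `2^-18` seconds, so the worst-case rounding error is half of that, just under 2 µs:

```python
import math

# Gap between adjacent doubles at this magnitude.
gap = math.ulp(17179869184.986754)
print(gap)      # 3.814697265625e-06  (= 2**-18)

# Maximum rounding error is half the gap -- already more than 1 microsecond.
print(gap / 2)  # 1.9073486328125e-06
```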

• This is not specific to Python; it is a property of the IEEE 754 double-precision format.
• Representing Unix time as a floating-point number of seconds since 1970-01-01 is not always accurate to the microsecond – the achievable precision depends on the magnitude of the value.
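To illustrate the magnitude dependence, here is a small comparison (my own example, not from the original answers): the same microsecond value round-trips exactly through `timestamp()` near the epoch, but not five centuries later:

```python
import datetime as dt

# Near the epoch the float timestamp is exact to the microsecond...
near = dt.datetime(1970, 1, 2, microsecond=986754, tzinfo=dt.timezone.utc)
print(near.timestamp())  # 86400.986754

# ...far from it, the same microsecond count no longer survives the float.
far = dt.datetime(2514, 5, 30, 1, 53, 4, 986754, tzinfo=dt.timezone.utc)
print(far.timestamp())   # 17179869184.986755
```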

## Answer 2:

It is an error of 1.37109375 µs due to conversion to a 64-bit float:

```python
print("%100.100f\n" % (17179869184.986754))
# 17179869184.9867553710937500000000000000000000000000000000000000000000000000000000000000000000000000000000000000
```

The exact value the float stores, and that value rounded to microseconds:

```
17179869184.98675537109375
17179869184.986755
```
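As a side note (my addition, not from either answer): if you need the exact microsecond count, you can stay in integer arithmetic by floor-dividing the timedelta by a one-microsecond timedelta, avoiding floats entirely:

```python
import datetime as dt

epoch = dt.datetime(1970, 1, 1, tzinfo=dt.timezone.utc)
t = dt.datetime(2514, 5, 30, 1, 53, 4, 986754, tzinfo=dt.timezone.utc)

# timedelta // timedelta is exact integer division -- no float is involved.
micros = (t - epoch) // dt.timedelta(microseconds=1)
print(micros)  # 17179869184986754
```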
