Float storage precision in hardware only to 13 decimals

Question:

There are many similar questions related to floating point, and I have read all of them that came up. Still, I am not able to grasp what I am doing wrong.

The CPython reference interpreter for Python 3 on a 64-bit x86 machine stores floats as double precision: 8 bytes, that is, 64 bits.

CPython implements float using C double type. The C double type usually implements IEEE 754 double-precision binary float, which is also called binary64.
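
For illustration, a minimal sketch of inspecting the raw 64-bit pattern behind such a float with the struct module:

import struct

# Pack the float into its raw IEEE 754 binary64 bytes (big-endian).
raw = struct.pack('>d', -81.0016666666670072)
print(raw.hex())     # c054401b4e81b500 - sign, exponent, significand
print(len(raw) * 8)  # 64 bits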

From https://en.wikipedia.org/wiki/IEEE_754-1985, this should mean that I get 16 decimal digits of precision:

Level              Width     Range at full precision           Precision
Double precision   64 bits   ±2.23×10^−308 to ±1.80×10^308     Approximately 16 decimal digits
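
As a side note, these limits can also be queried at runtime through sys.float_info:

import sys

# binary64 parameters as exposed by the interpreter (from C's float.h)
print(sys.float_info.max)       # 1.7976931348623157e+308
print(sys.float_info.min)       # 2.2250738585072014e-308 (smallest normal)
print(sys.float_info.mant_dig)  # 53 significand bits
print(sys.float_info.dig)       # 15 decimal digits are always faithfully representable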

But in the code below, I have two floats with 16 decimal places that differ at the 16th decimal place. Whether I use Decimal or float, Python on my machine (Python 3.10, x86-64, Linux) seems to handle only 13 decimal places. What am I missing?

lat1 = -81.0016666666670072 # or float(-81.0016666666670072)
lat2 = -81.0016666666670062  # or float(-81.0016666666670062)
print("Out as string lat1=-81.0016666666670072 lat2= -81.0016666666670062")
print(f"Precision 16 lat1:.16f {lat1:.16f} lat2:.16f{lat2:.16f}")

# Let's see how it is stored in hardware
print(f"Stored in HW as lat1.hex() {lat1.hex()} lat2.hex() {lat2.hex()}")
x = float.fromhex(lat1.hex())
y = float.fromhex(lat2.hex())
print(f"Reconstructed from Hex lat1:.16f {x:.16f} lat2:.16f{y:.16f}")

try:
    assert lat1 != lat2
except AssertionError:
    # Assertion failed - Python is saying lat1 == lat2
    print(f"Fail, {lat1} and {lat2} are really different at precision 16")

# try with Decimal
from decimal import Decimal, getcontext
getcontext().prec = 16

try:
    assert Decimal(lat1).compare(Decimal(lat2))
except AssertionError:
    # Assertion failed - compare() returned 0, i.e. Python is saying both are the same
    print(f"Fail, Decimal(lat1) {Decimal(lat1):.16f} and Decimal(lat2) {Decimal(lat2):.16f} are really different at precision 16")

print("Reducing precision to 14")
lat1 = -81.00166666666711
lat2 = -81.00166666666710
print(f"At precision 14-still equal lat1:.14f {lat1:.14f} lat2:.14f{lat2:.14f}")

print("Reducing precision to 13")
lat1 = -81.0016666666671
lat2 = -81.0016666666670
# Let's see the string representation
print(f"At precision 13-Not equal lat1:.13f {lat1:.13f} lat2:.13f{lat2:.13f}")

try:
    assert lat1 == lat2
except AssertionError:
    # Assertion failed - Python is saying lat1 != lat2, which is correct
    print(f"Pass, {lat1} and {lat2} are different")

Output

Out as string lat1=-81.0016666666670072 lat2= -81.0016666666670062
Precision 16 lat1:.16f -81.0016666666670062 lat2:.16f-81.0016666666670062
Stored in HW as lat1.hex() -0x1.4401b4e81b500p+6 lat2.hex() -0x1.4401b4e81b500p+6
Reconstructed from Hex lat1:.16f -81.0016666666670062 lat2:.16f-81.0016666666670062
Fail, -81.001666666667 and -81.001666666667 are really different at precision 16
Fail, Decimal(lat1) -81.0016666666670062 and Decimal(lat2) -81.0016666666670062 are really different at precision 16
Reducing precision to 14
At precision 14-still equal lat1:.14f -81.00166666666711 lat2:.14f-81.00166666666711
Reducing precision to 13
At precision 13-Not equal lat1:.13f -81.0016666666671 lat2:.13f-81.0016666666670
Pass, -81.0016666666671 and -81.001666666667 are different

Note: these are geo-location coordinates, and I know I don't need 16-digit accuracy. But I am still curious why I am only getting 13 decimal places of precision.

Also, even though I went through many answers, I could not work out why there seems to be no way to find out, through code, the number of digits of precision supported on a given OS/hardware.

Asked By: Alex Punnen


Answers:

81.0016666666670072 has 18 significant digits, not 16. :.16f prints 16 fixed-point digits after the decimal point, yet double is a floating-point type.

Start counting the "approximately 16 decimal digits" from the first non-zero digit:

12 34567890123456
81.0016666666670072
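
A minimal sketch to see this: converting a float to Decimal exposes the exact value the double stores, and both 18-significant-digit literals from the question land on the same double:

from decimal import Decimal

# Decimal(float) prints the exact stored binary64 value, with no rounding.
# The two literals differ only in their 17th/18th significant digits, so
# they round to the same double and print identical expansions.
print(Decimal(-81.0016666666670072))  # -81.001666666667006...
print(Decimal(-81.0016666666670062))  # same exact value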

Note that the precision is 53 binary digits, which corresponds to roughly 15.95 decimal digits, so sometimes you only get 15.
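
A quick sketch of where the 15.95 figure comes from:

import math
import sys

# 53 significand bits correspond to 53 * log10(2) decimal digits.
print(sys.float_info.mant_dig)                  # 53
print(sys.float_info.mant_dig * math.log10(2))  # ~15.95
print(sys.float_info.dig)                       # 15 digits always round-trip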

Just to illustrate the answer by @chux:

import sys
print(sys.float_info.dig)

lat1 = .29999999999999998 # precision 17 
lat2 = .29999999999999999 # precision 17 
print("Out as string lat1=.29999999999999998 lat2= .29999999999999999")
print(f"Precision 17 lat1:.17f {lat1:.17f} lat2:.17f{lat2:.17f}")
print(f"Precision 18 lat1:.18f {lat1:.18f} lat2:.18f{lat2:.18f}")

# Let's see how it is stored in hardware
print(f"Stored in HW as lat1.hex() {lat1.hex()} lat2.hex() {lat2.hex()}")
x = float.fromhex(lat1.hex())
y = float.fromhex(lat2.hex())
print(f"Reconstructed from Hex lat1:.17f {x:.17f} lat2:.17f{y:.17f}")

try:
    assert lat1 != lat2
    print(f"Pass 1, {lat1} and {lat2} are really different at precision 17")
except AssertionError:
    # Assertion failed - Python is saying lat1 == lat2
    print(f"Fail, {lat1} and {lat2} are really different at precision 17")

# try with Decimal
from decimal import Decimal, getcontext
getcontext().prec = 17

try:
    assert Decimal(lat1).compare(Decimal(lat2))
    print(f"Pass, Decimal(lat1) {Decimal(lat1):.17f} and Decimal(lat2) {Decimal(lat2):.17f} are really different at precision 17")
except AssertionError:
    # Assertion failed - compare() returned 0, i.e. Python is saying both are the same
    print(f"Fail, Decimal(lat1) {Decimal(lat1):.17f} and Decimal(lat2) {Decimal(lat2):.17f} are really different at precision 17")

print("Reducing precision to 16")
lat1 = .2999999999999999 # precision 16
lat2 = .2999999999999998
print(f"At precision 16 lat1:.16f {lat1:.16f} lat2:.15f{lat2:.16f}")
print(f"Stored in HW as lat1.hex() {lat1.hex()} lat2.hex() {lat2.hex()}")

try:
    assert lat1 == lat2
    print(f"Fail, {lat1} and {lat2} are different")
except AssertionError:
    # Assertion failed - Python is saying lat1 != lat2, which is correct
    print(f"Pass, {lat1} and {lat2} are different at precision 16")

Output

15
Out as string lat1=.29999999999999998 lat2= .29999999999999999
Precision 17 lat1:.17f 0.29999999999999999 lat2:.17f0.29999999999999999
Precision 18 lat1:.18f 0.299999999999999989 lat2:.18f0.299999999999999989
Stored in HW as lat1.hex() 0x1.3333333333333p-2 lat2.hex() 0x1.3333333333333p-2
Reconstructed from Hex lat1:.17f 0.29999999999999999 lat2:.17f0.29999999999999999
Fail, 0.3 and 0.3 are really different at precision 17
Fail, Decimal(lat1) 0.29999999999999999 and Decimal(lat2) 0.29999999999999999 are really different at precision 17
Reducing precision to 16
At precision 16 lat1:.16f 0.2999999999999999 lat2:.16f0.2999999999999998
Stored in HW as lat1.hex() 0x1.3333333333331p-2 lat2.hex() 0x1.3333333333330p-2
Pass, 0.2999999999999999 and 0.2999999999999998 are different at precision 16
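
As a final sketch (Python 3.9+ for math.ulp and math.nextafter), the spacing between adjacent doubles grows with the magnitude of the number. At |x| ≈ 81 the two integer digits consume part of the ~16 significant digits, which is why only about 13-14 fractional digits survive:

import math

# ULP = gap between a double and its nearest representable neighbour.
print(math.ulp(81.0))              # ~1.42e-14 -> ~13 reliable fractional digits
print(math.nextafter(81.0, 82.0))  # 81.00000000000001, the next double up
print(math.ulp(0.3))               # ~5.55e-17 -> ~16 fractional digits near 0.3
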
Answered By: Alex Punnen