Confused about decimal precision in Python

Question:

I use Python's decimal module to get around floating-point error. From what I understand, I can set how many decimal places a Decimal can have by setting the precision in the context. But I recently discovered that my understanding of decimal precision is likely wrong, because this is what happens when I run this code:

from decimal import Decimal, getcontext

getcontext().prec = 5

a = Decimal("80.05289")
b = Decimal("0.00015")

c = a * b
print(c)

Without a precision limit, this code would print 0.0120079335, the exact result of the multiplication. With the precision set to 5, I expected the result to be 0.01201, because the digits after the fifth decimal place would round it up.

The weird thing is that neither of those happened: the result Python gave me was 0.012008, as if I had set the precision to 6. Can someone explain what happened, and how I can fix this so that I always get only 5 decimal places?

Asked By: Vladislav Korecký


Answers:

The decimal module's precision counts significant digits, not decimal places. getcontext().prec = 5 therefore asks for 5 significant digits in the result, and 0.012008 does indeed have 5 significant digits (the leading zeros are not significant). To get 0.01201 for this particular calculation you would set getcontext().prec = 4.
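
For illustration, here is a minimal sketch of both points: the context precision counting significant digits, and Decimal.quantize as the usual way to enforce a fixed number of decimal places (choosing ROUND_HALF_UP here is an assumption; the context default is ROUND_HALF_EVEN):

from decimal import Decimal, getcontext, ROUND_HALF_UP

a = Decimal("80.05289")
b = Decimal("0.00015")

# prec limits significant digits, so 5 of them survive the multiplication:
getcontext().prec = 5
print(a * b)  # 0.012008

# With 4 significant digits the result matches the asker's expectation:
getcontext().prec = 4
print(a * b)  # 0.01201

# To always keep exactly 5 decimal places, quantize the result instead.
# Restore the default precision first so the exact product is available:
getcontext().prec = 28
c = (a * b).quantize(Decimal("0.00001"), rounding=ROUND_HALF_UP)
print(c)  # 0.01201

quantize snaps the value to the exponent of its argument, so it controls decimal places directly, independent of the magnitude of the number, which is what the question actually asks for.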

Answered By: James Lee