Why does math.cos(math.pi/2) not return zero?

Question:

I came across some weird behavior by math.cos() (Python 3.11.0):

>>> import math
>>> math.cos(math.pi)  # expected to get -1
-1.0
>>> math.cos(math.pi/2)  # expected to get 0
6.123233995736766e-17

I suspect that floating point math plays a role in this, but I'm not sure how. And if it did, I'd have assumed Python would just check whether the parameter equals math.pi/2 to begin with.

I found this answer by Jon Skeet, who said:

Basically, you shouldn’t expect binary floating point operations to be exactly right when your inputs can’t be expressed as exact binary values – which pi/2 can’t, given that it’s irrational.

But if this is true, then math.cos(math.pi) shouldn’t work either, because it also uses the math.pi approximation. My question is: why does this issue only show up when math.pi/2 is used?

Asked By: Michael M.


Answers:

The result of math.cos(math.pi) is inexact in the same way; however, its true value differs from -1.0 by far less than the spacing between -1.0 and its nearest representable neighbours, so the result rounds to exactly -1.0. This can be demonstrated like so:

>>> -1.0000_0000_0000_001
-1.000000000000001
>>> -1.0000_0000_0000_0001
-1.0
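
To make that rounding concrete, here is a short sketch (it assumes Python 3.9 or later for math.ulp and math.nextafter, which the answer itself does not use):

>>> import math
>>> math.ulp(1.0)                    # spacing between 1.0 and the next larger double
2.220446049250313e-16
>>> math.nextafter(-1.0, -2.0)       # the nearest representable value beyond -1.0
-1.0000000000000002
>>> -1.0000_0000_0000_0001 == -1.0   # the literal above rounds back to -1.0
True
>>> math.cos(math.pi) == -1.0        # the true deviation from -1.0 is likewise rounded away
True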
Answered By: Nick

Any error in math.pi versus the true π (and there is always some error) makes very little difference in one case, math.cos(math.pi), and is quite significant in the other, math.cos(math.pi/2).


The curve is flat

When math.cos(x) is very near -1.0, the curve is very flat: the slope is "close" to zero. About 47 million floating point x values near π have a cos(x) that is mathematically greater than -1.0, yet each of them is closer to -1.0 than to the next encodable value, -0.99999999999999988897…
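
A quick sketch of that flatness (assuming Python 3.9 or later for math.nextafter; the specific nudges are illustrative choices):

>>> import math
>>> math.cos(math.nextafter(math.pi, 4.0))   # one representable step past math.pi
-1.0
>>> math.cos(math.pi + 1e-9)                 # about two million steps away; still -1.0
-1.0

Roughly speaking, cos(x) rounds to -1.0 whenever |x - π| is below sqrt(2**-53) ≈ 1.05e-8, and with doubles spaced about 4.4e-16 apart near π that is on the order of 47 million values.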

The curve’s |slope| is 1

With x near π and math.cos(x/2) near 0.0, the cosine curve has a |slope| "close" to one. Both the next smaller and next larger encodable x have a different cos(x/2).
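
This can be seen directly with a small sketch (again assuming Python 3.9 or later for math.nextafter): the doubles immediately on either side of math.pi/2 already give visibly different cosines.

import math

x = math.pi / 2
for x_near in (math.nextafter(x, 0.0), x, math.nextafter(x, 2.0)):
    # each neighbouring double shifts the argument by about 2.2e-16, and with
    # |slope| near 1 that shows up fully in the result, so all three values differ
    print(x_near, math.cos(x_near))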

Conclusion

When the |result| of sin(x) or cos(x) is near 1.0, many nearby x values will report the same ±1.0.

This would be true even if some x value were incredibly close to π.

For x near π (like math.pi) and y = |cos(x)|, we need about twice the precision in y to see the effect of an imprecision in x.
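
That "twice the precision" point is the quadratic relationship near the extremum: nudging x away from π by d moves |cos(x)| away from 1.0 by only about d*d/2. A small sketch (the particular values of d are illustrative choices):

import math

for d in (1e-3, 1e-5, 1e-7):
    # 1.0 + math.cos(math.pi + d) tracks d*d/2: an input error of size d only
    # perturbs the result at roughly twice as many decimal places in
    print(d, 1.0 + math.cos(math.pi + d), d * d / 2)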

Two reasons:

  1. I think you know this, but it bears repeating: You did not take the cosine of π/2. You tried to, I know, but you didn’t. You took the cosine of 1.5707963267948965579989817342720925807952880859375, which is very close to π/2, but it is not exactly π/2. And the cosine of that number, as math.cos correctly told you, is a small number close to but not exactly equal to 0. If you could represent π/2 more accurately, you could get a result closer to 0, but under the present circumstances you can’t represent π/2 any more accurately than that: IEEE-754 floating point has finite precision, and that number 1.570…375 is the best you can do.

  2. In computer programming (as in life in general), when you do something wrong, sometimes you can get away with it. When something is not guaranteed to work, that doesn’t mean it is guaranteed not to work. When you took cos(π), you got "lucky" and got an answer of -1.0, even though in that case you were actually taking cos(3.141592653589793115997963468544185161590576171875). (If you had more precision to play with, you would have discovered that cos(3.141…875) is actually about -0.9999999999999999999999999999999925, but that number isn’t representable either, and rounds to -1.0; the sketch below reproduces this figure.)

And, in fact, if you dig even deeper, these results aren’t random. Other answers explain why, mathematically, even though you can’t represent π or π/2 exactly, you can get a closer result when taking the cosine of one than the other.
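
Both of the exact values quoted above can be checked with a short sketch using the decimal module (the reference digits of π below are typed in by hand, to more precision than a double carries; they are an input to the sketch, not something computed by Python's math module):

import math
from decimal import Decimal, getcontext

getcontext().prec = 50
PI = Decimal("3.14159265358979323846264338327950288419716939937510")  # reference digits of pi

# Point 1: the exact value of the double math.pi/2
print(Decimal(math.pi / 2))   # 1.5707963267948965579989817342720925807952880859375

# Point 2: how far math.pi falls short of pi, and what cos of it really is
eps = PI - Decimal(math.pi)   # ≈ 1.2246e-16
print(eps)
print(1 - eps * eps / 2)      # ≈ 0.9999999999999999999999999999999925..., i.e. |cos(math.pi)|

Since cos(π - eps) = -cos(eps) ≈ -(1 - eps²/2), that last number is the "about -0.9999999999999999999999999999999925" mentioned in point 2; and cos(π/2 - eps/2) = sin(eps/2) ≈ eps/2 ≈ 6.1e-17, which is exactly the value the question started from.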

Answered By: Steve Summit