Why does str(float) return more digits in Python 3 than Python 2?

Question:

In Python 2.7, repr of a float returns the nearest decimal number up to 17 digits long; this is precise enough to uniquely identify each possible IEEE floating point value. str of a float works similarly, except that it limits the result to 12 digits; for most purposes this is a more reasonable result, and it insulates you from the slight differences between binary and decimal representation.

Python 2 demo: http://ideone.com/OKJtxv

print str(1.4*1.5)
2.1
print repr(1.4*1.5)
2.0999999999999996
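
To make the trade-off concrete (a quick sketch, assuming a Python 2.7 interpreter): repr output round-trips back to the exact same float, while the truncated 12-digit str output may not:

print float(repr(1.4*1.5)) == 1.4*1.5
True
print float(str(1.4*1.5)) == 1.4*1.5
False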

In Python 3.2 it appears str and repr return the same thing.

Python 3 demo: http://ideone.com/oAKRsb

print(str(1.4*1.5))
2.0999999999999996
print(repr(1.4*1.5))
2.0999999999999996

Is there a PEP that describes the change, or some other statement from someone responsible?

Asked By: Mark Ransom


Answers:

No, there’s no PEP. There’s an issue in the bug tracker, and an associated discussion on the Python developers mailing list. While I was responsible for proposing and implementing the change, I can’t claim it was my idea: it had arisen during conversations with Guido at EuroPython 2010.

Some more details: as already mentioned in the comments, Python 3.1 introduced a new algorithm for the string repr of a float (later backported to the Python 2 series, so that it also appears in Python 2.7). As a result of this new algorithm, a “short” decimal number typed in at the prompt has a correspondingly short representation. This eliminated one of the existing reasons for the difference between str and repr, and made it possible to use the same algorithm for both. So for Python 3.2, following the discussion linked above, str and repr were made identical.

As to why: it makes the language a little bit smaller and cleaner, and it removes the rather arbitrary choice of 12 digits when outputting the string. (The choice of 17 digits used for the repr in Python versions prior to 2.7 is far from arbitrary, by the way: two distinct IEEE 754 binary64 floats will have distinct representations when converted to decimal with 17 significant digits, and 17 is the smallest integer with this property.)
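
For anyone who wants to check that claim empirically, here is a rough Python 3 sketch (the helper names are purely illustrative) that reinterprets random 64-bit patterns as floats and confirms that 17 significant digits always round-trip, while 16 sometimes do not:

import random
import struct

def roundtrips(x, digits):
    # Format x with the given number of significant digits, then parse it back.
    return float('%.*g' % (digits, x)) == x

def random_binary64():
    # Reinterpret a random 64-bit pattern as a float, skipping NaNs and infinities.
    while True:
        x, = struct.unpack('<d', struct.pack('<Q', random.getrandbits(64)))
        if x == x and abs(x) != float('inf'):
            return x

samples = [random_binary64() for _ in range(10000)]
assert all(roundtrips(x, 17) for x in samples)
print(sum(not roundtrips(x, 16) for x in samples), "of 10000 samples needed the 17th digit")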

Apart from simplicity, there are some less obvious benefits. One aspect of the repr versus str distinction that has confused users in the past is that repr automatically gets used in containers. So for example, in Python 2.7:

>>> x = 1.4 * 1.5
>>> print x
2.1
>>> print [x]
[2.0999999999999996]
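
The mechanism is that a container's str formats its elements with repr rather than str. A small sketch with a purely illustrative class (the behaviour is the same in Python 2 and 3) makes this visible:

class Probe(object):
    # Illustrative only: make str and repr output distinguishable.
    def __str__(self):
        return "via __str__"
    def __repr__(self):
        return "via __repr__"

print(Probe())    # via __str__    -- print uses str()
print([Probe()])  # [via __repr__] -- the list formats its elements with repr()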

I’m sure there’s at least one StackOverflow question asking about this phenomenon somewhere. With the simplification introduced in Python 3.2, we get this instead:

>>> x = 1.4 * 1.5
>>> print(x)
2.0999999999999996
>>> print([x])
[2.0999999999999996]

which is at least more consistent.

If you do want to hide the imprecision, the right way to do it remains the same: use string formatting for precise control of the output format.

>>> print("{:.12g}".format(x))
2.1
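
The same format specification also works with the format() built-in, and from Python 3.6 onwards with f-strings:

>>> print(format(x, ".12g"))
2.1
>>> print(f"{x:.12g}")
2.1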

I hope that explains some of the reasoning behind the change. I’m not going to argue that it’s universally beneficial: as you point out, the old str had the convenient side-effect of hiding imprecisions. But in my opinion (of course, I’m biased), it does help eliminate a few surprises from the language.

Answered By: Mark Dickinson