Possible to have “a < b and not(a - b < 0)” with floats

Question:

Is

a < b and not(a - b < 0)

possible for floating-point numbers due to floating-point round-off error?
Is there an example?

Asked By: SlimJim


Answers:

It depends on the floating point format, and specifically whether it has gradual underflow. IEEE 754 binary floating point does. That means the gap between two distinct floats is always non-zero, even if it can only be represented with one significant bit in the extreme case.

The smallest possible absolute difference between two distinct floats occurs with equal signs (or one of the numbers zero), the minimum (all-zeros) exponent field, and significands that differ by 1. In that case, subtracting the larger from the smaller results in the smallest-magnitude negative number: negative sign, the minimum exponent field, and significand 1. That number compares less than zero.
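You can check this directly in Python, whose floats are IEEE 754 binary64 on essentially all platforms. The following sketch builds the two adjacent subnormals described above using `float.fromhex`:

```python
# Two adjacent subnormals: the smallest positive float and the next one up.
# The 'p-1022' suffix is a binary exponent; the fraction digits are the
# raw significand bits of the subnormal range.
a = float.fromhex('0x0.0000000000001p-1022')  # smallest positive subnormal (5e-324)
b = float.fromhex('0x0.0000000000002p-1022')  # next representable float

diff = a - b
print(a < b)     # True
print(diff < 0)  # True: the difference is the smallest-magnitude negative float
print(diff == -a)  # True: exactly -5e-324, not zero
```

Because gradual underflow makes the difference exactly representable, `a < b` and `a - b < 0` agree, as Patricia Shanahan's argument predicts.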

There are other calculations that do underflow to zero. For example, dividing the smallest positive number by a number greater than or equal to two results in zero in the normal rounding mode.
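A quick illustration of that division case, again in Python (assuming the default round-to-nearest mode, which CPython uses):

```python
# The smallest positive float: dividing it by 2 underflows past the
# subnormal range entirely, and the result rounds to zero.
tiny = float.fromhex('0x0.0000000000001p-1022')  # 5e-324

print(tiny > 0.0)       # True
print(tiny / 2 == 0.0)  # True: the true quotient has no representation
```

Note this is a genuine underflow to zero, unlike subtraction of distinct floats, which gradual underflow protects.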

Answered By: Patricia Shanahan

You can run through some of the “corner cases” easily like this:

>>> from itertools import product
>>> for a, b in product([0.0, 1.0, float('-inf'), float('inf'), float('nan')], repeat=2):
...     print(a < b and not(a - b < 0))
...
False
False
# and so on
Answered By: John La Rooy

[This answer is intended as a pedant’s supplement to the fine answer already given by Patricia Shanahan. That answer covers the normal case; here we’re worrying about edge cases that you’re unlikely to encounter in practice.]

Yes, it is perfectly possible. Here’s a Python session from my very ordinary, Intel-based Mac laptop:

Enthought Canopy Python 2.7.6 | 64-bit | (default, Jan 29 2014, 17:09:48) 
[GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import evil
>>> a = float.fromhex('1p-1022')
>>> b = float.fromhex('1.01p-1022')
>>> a < b and not(a - b < 0)
True

Of course, as the import indicates, there’s something evil going on here. Here are the contents of evil.py. (Warning: this is highly platform specific, and as written will only work on OS X. Variations of this should work on Linux/x64, provided that Python has been compiled to use the SSE2 instructions instead of the x87 unit for floating-point operations. I have no idea about Windows.)

# Being lazy: we should really import just the things we need.
from ctypes import *

# Type corresponding to fenv_t from fenv.h.  (Platform specific!)
class fenv_t(Structure):
    _fields_ = [("control", c_ushort), ("status", c_ushort),
                ("mxcsr", c_uint), ("reserved", c_char * 8)]

# Wrap fegetenv and fesetenv from the C library.
libc = CDLL("/usr/lib/libc.dylib")
fegetenv = libc.fegetenv
fegetenv.restype, fegetenv.argtypes = c_int, (POINTER(fenv_t),)
fesetenv = libc.fesetenv
fesetenv.restype, fesetenv.argtypes = c_int, (POINTER(fenv_t),)

# Set the flush-to-zero (FTZ) bit in the MXCSR control register.
env = fenv_t()
fegetenv(pointer(env))
env.mxcsr |= 1 << 15
fesetenv(pointer(env))

So what we’re doing here is messing with the FPU settings held in the MXCSR control register. On Intel x64 processors supporting the SSE instruction sets, there are two interesting flags that affect the behaviour of operations involving subnormal numbers. The FTZ (flush-to-zero) flag, when set, causes any subnormal output of an arithmetic operation to be replaced with zero. The DAZ (denormals-are-zero) flag, when set, causes any subnormal input to an arithmetic operation to be treated as though it were zero. The point of these flags is that they can significantly speed up operations involving subnormal numbers, at the expense of compliance with the IEEE 754 standard. In the code above, we’ve set the FTZ flag in the MXCSR control register.

And now we choose a and b so that both a and b are normal, but their difference is subnormal. Then a < b will be true (as usual), but a - b will be -0.0, and the comparison a - b < 0 fails.
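For contrast, on a standard-compliant setup (i.e., without the FTZ hack above), the same pair subtracts to a negative subnormal, so the questioner's condition never triggers:

```python
# Both values are normal floats near the bottom of the normal range.
a = float.fromhex('1p-1022')     # smallest positive normal float
b = float.fromhex('1.01p-1022')  # slightly larger normal float

# Under IEEE 754 gradual underflow, a - b is a (negative) subnormal,
# not zero, so the two comparisons agree.
print(a < b)                          # True
print(a - b < 0)                      # True
print(a < b and not (a - b < 0))      # False
```

With FTZ enabled, as in the `evil.py` session, that last expression becomes `True` instead.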

The take-away point is that it’s not enough that the floating-point format you’re using is an IEEE 754 format. You also need to know that your operations comply with the standard. The FTZ mode is an example of one way that compliance can fail. To relate this to Patricia Shanahan’s answer: the code in evil.py is turning off the gradual underflow that IEEE 754 promises. (Thanks @EricPostpischil for pointing this out in the comments.)

Answered By: Mark Dickinson