What might be the cause of 'invalid value encountered in less_equal' in numpy

Question:

I experienced a RuntimeWarning

 RuntimeWarning: invalid value encountered in less_equal

Generated by this line of code of mine:

center_dists[j] <= center_dists[i]

Both center_dists[j] and center_dists[i] are NumPy arrays.

What might be the cause of this warning?

Asked By: Alex Gao


Answers:

That’s most likely happening because of an np.nan somewhere in the inputs involved. An example is shown below –

In [1]: A = np.array([4, 2, 1])

In [2]: B = np.array([2, 2, np.nan])

In [3]: A<=B
RuntimeWarning: invalid value encountered in less_equal
Out[3]: array([False,  True, False], dtype=bool)

For all those comparisons involving np.nan, it would output False. Let’s confirm it for a broadcasted comparison. Here’s a sample –

In [1]: A = np.array([4, 2, 1])

In [2]: B = np.array([2, 2, np.nan])

In [3]: A[:,None] <= B
RuntimeWarning: invalid value encountered in less_equal
Out[3]: 
array([[False, False, False],
       [ True,  True, False],
       [ True,  True, False]], dtype=bool)

Notice the third column in the output, which corresponds to the comparisons involving the third element of B (np.nan) and therefore contains all False values.
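To check whether this applies to your own data, a quick diagnostic (a minimal sketch; the center_dists values below are made up) is:

import numpy as np

center_dists = np.array([0.5, np.nan, 1.2])    # stand-in for the OP's data

print(np.isnan(center_dists).any())            # True -> NaNs are present
print(np.argwhere(np.isnan(center_dists)))     # indices of the NaN entries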

Answered By: Divakar

As a follow-up to Divakar’s answer and his comment on how to suppress the RuntimeWarning, a safer way is to suppress it only locally using with np.errstate() (docs): it is generally good to be alerted when comparisons to np.nan yield False, and to ignore the warning only when that is really what is intended. Here it is for the OP’s example:

with np.errstate(invalid='ignore'):
  center_dists[j] <= center_dists[i]

Upon exiting the with block, error handling is reset to what it was before.

Instead of only invalid value encountered, one can also ignore all errors by passing all='ignore'. Interestingly, this keyword is missing from the documented kwargs of np.errstate(), but not from those of np.seterr(). (Seems like a small bug in the np.errstate() docs.)
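For completeness, a small runnable sketch (with throwaway arrays, not the OP’s data) demonstrating both variants:

import numpy as np

A = np.array([4, 2, 1])
B = np.array([2, 2, np.nan])

# suppress only the 'invalid value' floating-point warnings for this block
with np.errstate(invalid='ignore'):
    res = A <= B

# or suppress every floating-point warning inside the block
with np.errstate(all='ignore'):
    res = A <= B

print(res)   # [False  True False]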

Answered By: Ulrich Stern

This happens due to NaN values in the DataFrame, which is perfectly fine for a DataFrame.

In PyCharm, this worked like a charm for me:

import warnings

warnings.simplefilter(action="ignore", category=RuntimeWarning)
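A minimal, self-contained sketch of this approach (with made-up arrays); note that the filter applies to the whole process, not just one block:

import warnings
import numpy as np

warnings.simplefilter(action="ignore", category=RuntimeWarning)

a = np.array([1.0, np.nan])
b = np.array([0.5, 2.0])
print(a <= b)   # [False False] -- no RuntimeWarning is emitted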
Answered By: Revanth M

NumPy dtypes are strict, so it does not produce an array like np.array([False, True, np.nan]); instead it returns array([ 0.,  1., nan]), which is a float array.

If you try to change a bool array like:

x = np.array([False, False, False])
x[0] = 5

it will return array([ True, False, False]) … wow

But I think 5 > np.nan cannot be False; it should be nan. False would mean that a real comparison was made and returned a result, like 3 > 5, which I think is a disaster: NumPy produces data that we actually don’t have. If it returned nan instead, we could handle it with ease.

So I tried to modify the behavior with a function.

import numpy as np

def ngrater(x, y):
    # element-wise x > y as an object array: True/False where both values
    # are defined, np.nan where either input is NaN
    with np.errstate(invalid='ignore'):
        c = x > y
    c = c.astype(object)  # object dtype so elements can also hold np.nan
    c[np.isnan(x)] = np.nan
    c[np.isnan(y)] = np.nan
    return c

a = np.array([np.nan, 1, 2, 3, 4, 5, np.nan, np.nan, np.nan])  # 9 elements
b = np.array([0, 1, -2, -3, -4, -5, -5, -5, -5])               # 9 elements

ngrater(a, b)

returns:
array([nan, False, True, True, True, True, nan, nan, nan], dtype=object)

But I think the whole memory structure is changed that way. Instead of a memory block of uniform units, it produces a block of pointers, with the real data stored somewhere else. So the function may perform slower, and that is probably why NumPy doesn’t do this. We would need a superBool dtype that can also contain np.nan, or we just have to use float arrays: +1 for True, -1 for False, nan for nan.
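A rough sketch of that float-array encoding (the helper name float_greater is made up here; +1.0 stands for True, -1.0 for False, and nan is propagated):

import numpy as np

def float_greater(x, y):
    # element-wise x > y encoded as floats: 1.0 (True), -1.0 (False), nan (unknown)
    with np.errstate(invalid='ignore'):
        out = np.where(x > y, 1.0, -1.0)
    out[np.isnan(x) | np.isnan(y)] = np.nan
    return out

a = np.array([np.nan, 1, 2, 3])
b = np.array([0, 1, -2, -3])
print(float_greater(a, b))   # [nan -1.  1.  1.]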

Adding to the above answers, another way to suppress this warning is to use numpy.less explicitly, supplying its where and out parameters:

np.less([1, 2], [2, np.nan])  

outputs array([ True, False]) and triggers the runtime warning,

np.less([1, 2], [2, np.nan], where=np.isnan([2, np.nan])==False)

does not calculate the result for the 2nd array element; according to the docs this leaves that value undefined (I got True for both elements), while

np.less([1, 2], [2, np.nan], where=np.isnan([2, np.nan])==False, out=np.full((1, 2), False))

writes the result into an array pre-initialized to False (and so always gives False in the 2nd element).
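The same where/out pattern carries over to the OP’s expression with np.less_equal (a sketch; the center_dists values and indices here are made up):

import numpy as np

center_dists = np.array([[0.9, np.nan],
                         [0.7, 0.1]])   # stand-in data
i, j = 0, 1

valid = ~(np.isnan(center_dists[j]) | np.isnan(center_dists[i]))
result = np.less_equal(center_dists[j], center_dists[i],
                       where=valid,
                       out=np.zeros_like(valid))   # NaN positions stay False
print(result)   # [ True False]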

Answered By: Yuri Feldman