Keeping NaNs with pandas dataframe inequalities

Question:

I have a pandas.DataFrame object that contains about 100 columns and 200,000 rows of data. I am trying to convert it to a boolean DataFrame where True means the value is at or above a threshold, False means it is below, and NaN values are preserved.

If there are no NaN values, it takes about 60 ms for me to run:

df >= threshold

But when I try to deal with the NaNs, the below method works, but is very slow (20 sec).

def func(x):
    if x >= threshold:
        return True
    elif x < threshold:
        return False
    else:
        return x
df.apply(lambda col: col.apply(func))

Is there a faster way?

Asked By: jsignell


Answers:

You can check for NaNs separately using this post: Python – find integer index of rows with NaN in pandas

df.isnull()

Combine the output of isnull with df >= threshold using bitwise OR. Note the parentheses: | binds more tightly than >=, so the comparison must be grouped explicitly:

df.isnull() | (df >= threshold)

You can expect the two masks to take closer to 200 ms to compute and combine, but that is still far enough from 20 s to be OK.
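A minimal sketch of the combined mask on a toy frame (the threshold and values here are made up):

```python
import numpy as np
import pandas as pd

threshold = 0.5
df = pd.DataFrame({"a": [0.2, 0.8, np.nan], "b": [np.nan, 0.4, 0.9]})

# Parentheses matter: | binds more tightly than >=.
mask = df.isnull() | (df >= threshold)
print(mask["a"].tolist())  # [False, True, True]
print(mask["b"].tolist())  # [True, False, True]
```

Note that this mask is True at NaN positions as well as for values at or above the threshold, so it answers "missing or above the threshold" rather than keeping NaN as a third state.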

Answered By: Mad Physicist

You can do:

import numpy as np

new_df = df >= threshold
new_df[df.isnull()] = np.nan

But that is different from what you will get using the apply method. Here your mask has float dtype containing NaN, 0.0 and 1.0. In the apply solution you get object dtype with NaN, False, and True.

Neither is OK to be used as a mask, because you might not get what you want. IEEE 754 says that any comparison with NaN must yield False, and the apply method implicitly violates that by returning NaN!

The best option is to keep track of the NaNs separately; df.isnull() is quite fast when bottleneck is installed.
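A short sketch of the "track the NaNs separately" idea on a made-up frame:

```python
import numpy as np
import pandas as pd

threshold = 0.5
df = pd.DataFrame({"a": [0.2, 0.8, np.nan]})

above = df >= threshold   # bool dtype; NaN >= threshold is False per IEEE 754
missing = df.isnull()     # bool dtype; True exactly where the value was NaN

print(above["a"].tolist())    # [False, True, False]
print(missing["a"].tolist())  # [False, False, True]
```

Both masks stay in fast bool dtype; `above` is only meaningful at positions where `missing` is False.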

Answered By: ocefpaf

In this situation I use an indicator array of floats, coded as 0 = False, 1 = True, and NaN = missing. A pandas DataFrame with bool dtype cannot hold missing values, and a DataFrame with object dtype holding a mix of Python bool and float objects is not efficient. This leads us to DataFrames with np.float64 dtype.

numpy.sign(x - threshold) gives -1 where x < threshold, 0 where x == threshold, and +1 where x > threshold, which might be good enough for your purposes; if you really need 0/1 coding, the conversion can be made in place. Timings below are on a 200K-length array x:

In [45]: %timeit y = (x > 0); y[pd.isnull(x)] = np.nan
100 loops, best of 3: 8.71 ms per loop

In [46]: %timeit y = np.sign(x)
100 loops, best of 3: 1.82 ms per loop

In [47]: %timeit y = np.sign(x); y += 1; y /= 2
100 loops, best of 3: 3.78 ms per loop
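A hedged sketch of the sign-based coding on a toy Series (threshold and values made up); note that with the in-place 0/1 rescaling, values exactly equal to the threshold land on 0.5:

```python
import numpy as np
import pandas as pd

threshold = 0.5
x = pd.Series([0.2, 0.5, 0.8, np.nan])

# -1 below, 0 at, +1 above the threshold; NaN propagates through.
coded = np.sign(x - threshold)
print(coded.tolist())  # [-1.0, 0.0, 1.0, nan]

# Rescale toward 0/1 coding in place; exact ties map to 0.5.
coded += 1
coded /= 2
print(coded.tolist())  # [0.0, 0.5, 1.0, nan]
```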
Answered By: Kerby Shedden

Another option is to use mask:

df.mask(~df.isna(), df >= threshold)

This will only apply the condition to non-NaN values and leave the NaN values untouched.
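A minimal sketch of this mask-based approach on a toy frame (values made up):

```python
import numpy as np
import pandas as pd

threshold = 0.5
df = pd.DataFrame({"a": [0.2, 0.8, np.nan]})

# Where the value is present, replace it with the comparison result;
# where it is NaN, leave it untouched.
out = df.mask(~df.isna(), df >= threshold)
print(out["a"].tolist())  # present values become booleans, NaN stays NaN
```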

Answered By: Bruno Carballo

This will keep null values, and transform the rest to booleans depending on whether they meet the specified threshold:

from pandas import isnull

df["my_score"].apply(lambda score: score if isnull(score) else score >= threshold)
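As a quick sketch of this per-column approach on a made-up `my_score` column:

```python
import numpy as np
import pandas as pd
from pandas import isnull

threshold = 0.5
df = pd.DataFrame({"my_score": [0.2, 0.8, np.nan]})

result = df["my_score"].apply(
    lambda score: score if isnull(score) else score >= threshold
)
print(result.tolist())  # [False, True, nan]
```

The result is an object-dtype Series mixing bools and NaN, so it keeps the three states at the cost of the slower apply path.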
Answered By: s2t2