How to fill dataframe NaN values with an empty list [] in pandas?
Question:
This is my dataframe:
date ids
0 2011-04-23 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,...
1 2011-04-24 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,...
2 2011-04-25 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,...
3 2011-04-26 NaN
4 2011-04-27 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,...
5 2011-04-28 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,...
I want to replace each NaN with an empty list []. How can I do that? fillna([]) did not work. I even tried replace(np.nan, []), but it gives an error:
TypeError: Invalid "to_replace" type: 'float'
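For anyone who wants to follow along, a minimal stand-in for this setup (with the lists truncated for readability) might look like:

```python
import numpy as np
import pandas as pd

# Small reproduction of the dataframe in the question: one row has NaN
# instead of a list in the 'ids' column.
df = pd.DataFrame({
    'date': ['2011-04-23', '2011-04-24', '2011-04-25',
             '2011-04-26', '2011-04-27', '2011-04-28'],
    'ids': [[0, 1, 2], [0, 1, 2], [0, 1, 2],
            np.nan, [0, 1, 2], [0, 1, 2]],
})

print(df['ids'].isna().sum())  # one missing value
```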
Answers:
You can first use loc to locate all rows that have a NaN in the ids column, and then loop through those rows using at to set their values to an empty list:
for row in df.loc[df.ids.isnull(), 'ids'].index:
    df.at[row, 'ids'] = []
>>> df
date ids
0 2011-04-23 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]
1 2011-04-24 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]
2 2011-04-25 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]
3 2011-04-26 []
4 2011-04-27 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]
5 2011-04-28 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13]
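As a self-contained sketch of this approach (the dataframe here is a small stand-in for the one in the question):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'date': ['2011-04-25', '2011-04-26', '2011-04-27'],
                   'ids': [[0, 1, 2], np.nan, [3, 4]]})

# Loop over the index labels of the null rows and set each cell with .at,
# which accepts a list as a single cell value.
for row in df.loc[df.ids.isnull(), 'ids'].index:
    df.at[row, 'ids'] = []

print(df)
```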
Without assignments:
1) Assuming we have only floats and integers in our dataframe:
import math
df.apply(lambda col: col.apply(lambda x: [] if math.isnan(x) else x))
2) For any dataframe:
import math

def isnan(x):
    # True only for numeric NaN; non-numeric values (e.g. lists) return False
    return isinstance(x, (int, float)) and math.isnan(x)

df.apply(lambda col: col.apply(lambda x: [] if isnan(x) else x))
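A runnable sketch of variant 2 on a small mixed-type dataframe (the data here is illustrative):

```python
import math
import numpy as np
import pandas as pd

def isnan(x):
    # Only numeric scalars can be NaN; lists and strings pass through untouched.
    return isinstance(x, (int, float)) and math.isnan(x)

df = pd.DataFrame({'ids': [[0, 1], np.nan, 'text']})
out = df.apply(lambda col: col.apply(lambda x: [] if isnan(x) else x))
print(out)
```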
After a lot of head-scratching I found this method, which should be the most efficient (no looping, no apply): just assigning to a slice:
isnull = df.ids.isnull()
df.loc[isnull, 'ids'] = [ [[]] * isnull.sum() ]
The trick was to construct a list of [] of the right size (isnull.sum()), and then enclose it in another list: the value being assigned is a 2D array (1 column, isnull.sum() rows) containing empty lists as elements.
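Whether that 2D-array form is accepted has varied across pandas versions. An alternative slice assignment (my variation, not part of the original answer) builds a Series whose index matches the null rows, which .loc aligns on:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'ids': [[0, 1], np.nan, np.nan, [2]]})

isnull = df['ids'].isnull()
# Build one *distinct* empty list per missing row ([[]] * n would repeat
# references to a single shared list), indexed by the null-row labels.
filler = pd.Series([[] for _ in range(isnull.sum())], index=df.index[isnull])
df.loc[isnull, 'ids'] = filler
print(df)
```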
My approach is similar to @hellpanderrr's, but tests for list-ness rather than using isnan:
df['ids'] = df['ids'].apply(lambda d: d if isinstance(d, list) else [])
I originally tried using pd.isnull (or pd.notnull), but, when given a list, that returns the null-ness of each element.
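A quick self-contained check of this approach (with a stand-in dataframe):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'ids': [[0, 1, 2], np.nan, [3]]})

# Anything that is not already a list (here: NaN) becomes [].
df['ids'] = df['ids'].apply(lambda d: d if isinstance(d, list) else [])
print(df['ids'].tolist())
```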
Create a function that checks your condition; if the condition fails, it returns an empty list (or an empty set, etc.). Then apply that function to the column, assigning the result back to the same column or to a new one:
aa = pd.DataFrame({'d': [1, 1, 2, 3, 3, np.nan], 'r': [3, 5, 5, 5, 5, 'e']})

def check_condition(x):
    if x > 0:
        return x
    else:
        return list()

aa['d'] = aa.d.apply(check_condition)
Maybe more dense:
df['ids'] = [x if isinstance(x, list) else [] for x in df['ids']]
Another solution using numpy:
df.ids = np.where(df.ids.isnull(), pd.Series([[]]*len(df)), df.ids)
Or using combine_first:
df.ids = df.ids.combine_first(pd.Series([[]]*len(df)))
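One caveat worth knowing with both of these: [[]] * len(df) repeats a reference to a single list object, so every filled cell shares the same list, and mutating one cell mutates them all. A sketch of the pitfall and a fix (my addition, not part of the original answer):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'ids': [np.nan, np.nan]})
df.ids = np.where(df.ids.isnull(), pd.Series([[]] * len(df)), df.ids)

df.at[0, 'ids'].append(99)   # mutate one cell...
print(df.at[1, 'ids'])       # ...and the other cell changes too

# Fix: build a distinct list per row.
df2 = pd.DataFrame({'ids': [np.nan, np.nan]})
df2.ids = np.where(df2.ids.isnull(),
                   pd.Series([[] for _ in range(len(df2))]), df2.ids)
df2.at[0, 'ids'].append(99)
print(df2.at[1, 'ids'])      # stays empty
```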
This is probably a faster, one-liner solution:
df['ids'].fillna('DELETE').apply(lambda x : [] if x=='DELETE' else x)
Maybe not the shortest or most optimized solution, but I think it is pretty readable:
# Masking-in nans
mask = df['ids'].isna()
# Filling nans with a list-like string and literally evaluating that string
df.loc[mask, 'ids'] = df.loc[mask, 'ids'].fillna('[]').apply(eval)
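If the strings could ever come from an untrusted source, ast.literal_eval is a safer drop-in for eval here (my substitution, not part of the original answer):

```python
import ast
import numpy as np
import pandas as pd

df = pd.DataFrame({'ids': [[0, 1], np.nan]})

mask = df['ids'].isna()
# literal_eval only parses Python literals, so '[]' becomes [] without
# the arbitrary-code-execution risk of eval.
df.loc[mask, 'ids'] = df.loc[mask, 'ids'].fillna('[]').apply(ast.literal_eval)
print(df['ids'].tolist())
```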
Surprisingly, passing a dict with empty lists as values seems to work for Series.fillna, but not DataFrame.fillna, so if you want to work on a single column you can use this:
>>> df
A B C
0 0.0 2.0 NaN
1 NaN NaN 5.0
2 NaN 7.0 NaN
>>> df['C'].fillna({i: [] for i in df.index})
0 []
1 5
2 []
Name: C, dtype: object
The solution can be extended to DataFrames by applying it to every column.
>>> df.apply(lambda s: s.fillna({i: [] for i in df.index}))
A B C
0 0 2 []
1 [] [] 5
2 [] 7 []
Note: for large Series/DataFrames with few missing values, this might create an unreasonable amount of throwaway empty lists.
Tested with pandas 1.0.5.
A simple solution would be:
df['ids'].fillna("").apply(list)
As noted by @timgeb, this requires df['ids'] to contain only lists or NaN.
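As a self-contained check of why this works (with a stand-in dataframe):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'ids': [[0, 1, 2], np.nan]})

# NaN -> "" -> list("") == []; existing lists are shallow-copied by list().
# Note this is why the column must hold only lists or NaN: a string cell
# such as 'ab' would be split into ['a', 'b'].
result = df['ids'].fillna("").apply(list)
print(result.tolist())
```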
Another, more explicit solution:
# use apply to only replace the nulls with the list
df.loc[df.ids.isnull(), 'ids'] = df.loc[df.ids.isnull(), 'ids'].apply(lambda x: [])
You can try this:
df.fillna(df.notna().applymap(lambda x: x or []))