Remove rows with empty lists from pandas data frame


I have a data frame with some columns with empty lists and others with lists of strings:

              donation_orgs                               donation_context
0                        []                                             []
1  [the research of Dr. ...]  [In lieu of flowers , memorial donations ...]
I’m trying to return a data set without any of the rows where there are empty lists.

I’ve tried just checking for null values:

dfnotnull = df[df.donation_orgs != []]


dfnotnull = df[df.notnull().any(axis=1)]

And I’ve tried looping through and checking for values that exist, but I think the lists aren’t returning Null or None like I thought they would:

dfnotnull = pd.DataFrame(columns=('donation_orgs', 'donation_context'))
for i in range(0,len(df)):
    if df['donation_orgs'].iloc(i):
        dfnotnull.loc[i] = df.iloc[i]

All three of the above methods simply return every row in the original data frame.
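To see why the first two attempts fail: an empty list is a real Python object, so pandas does not treat it as null, and notnull() reports True for every cell. (The loop version also indexes iloc with parentheses instead of square brackets, so its condition is always truthy.) A minimal reconstruction, assuming the frame shown above:

```python
import pandas as pd

# Rebuild the question's frame (structure assumed from the sample output)
df = pd.DataFrame({
    'donation_orgs': [[], ['the research of Dr.']],
    'donation_context': [[], ['In lieu of flowers , memorial donations']],
})

# An empty list is an ordinary object, not NaN, so notnull() is True everywhere
print(df.notnull().any(axis=1).tolist())  # [True, True]
```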

Asked By: Ben Price



You could cast the columns to strings and compare against the text of an empty list, treating the cells as though they were strings instead of lists:

import pandas as pd

df = pd.DataFrame({
    'donation_orgs': [[], ['the research of Dr.']],
    'donation_context': [[], ['In lieu of flowers , memorial donations']]})

df[df.astype(str)['donation_orgs'] != '[]']

                            donation_context          donation_orgs
1  [In lieu of flowers , memorial donations]  [the research of Dr.]
Answered By: Woody Pride

You can use the following one-liner:

df[(df['donation_orgs'].str.len() != 0) | (df['donation_context'].str.len() != 0)]
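As a quick check, here is that mask applied to a reconstruction of the sample frame (column contents assumed from the question); .str.len() works on list-valued columns too, returning each list's length:

```python
import pandas as pd

df = pd.DataFrame({
    'donation_orgs': [[], ['the research of Dr.']],
    'donation_context': [[], ['In lieu of flowers , memorial donations']],
})

# Keep rows where at least one of the two list columns is non-empty
mask = (df['donation_orgs'].str.len() != 0) | (df['donation_context'].str.len() != 0)
print(df[mask].index.tolist())  # [1]
```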
Answered By: Amir Imani

To avoid converting to str and actually use the lists, you can do this:

df[df['donation_orgs'].map(lambda d: len(d)) > 0]

It maps the donation_orgs column to the length of the lists of each row and keeps only the ones that have at least one element, filtering out empty lists.

It returns

                            donation_context          donation_orgs
1  [In lieu of flowers , memorial donations]  [the research of Dr.]

as expected.
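A small variation on the same idea, under the same assumed data: len can be passed to map directly without the lambda, and mapping bool yields the same mask because empty lists are falsy:

```python
import pandas as pd

df = pd.DataFrame({
    'donation_orgs': [[], ['the research of Dr.']],
    'donation_context': [[], ['In lieu of flowers , memorial donations']],
})

# map(len) > 0 and map(bool) select the same rows
filtered = df[df['donation_orgs'].map(len) > 0]
same = df[df['donation_orgs'].map(bool)]
print(filtered.equals(same))  # True
```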

Answered By: Victor

If you read the data from a CSV file, another possible solution is this:

import pandas as pd

df = pd.read_csv('data.csv', na_filter=True, na_values='[]')

na_values defines additional strings to recognize as NaN; na_filter (which is enabled by default) turns that detection on. I tested this on pandas-0.24.2.
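One caveat worth sketching: na_values only marks the '[]' cells as NaN while reading; you still need dropna() to actually remove those rows, and the surviving cells come back as strings rather than lists. A self-contained sketch using an in-memory stand-in for data.csv (contents assumed from the question):

```python
import io
import pandas as pd

# Hypothetical CSV mimicking the question's data
csv = io.StringIO(
    "donation_orgs,donation_context\n"
    "[],[]\n"
    '"[the research of Dr.]","[In lieu of flowers , memorial donations]"\n'
)

# na_values turns the literal string '[]' into NaN while reading;
# dropna() then removes the rows that became all-NaN
df = pd.read_csv(csv, na_values='[]').dropna()
print(len(df))  # 1
```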

Answered By: Mark

The data type is probably the issue: the cells hold lists, not strings. Casting to str before comparing should help:

df[df.astype(str)['donation_orgs'] != '[]']
Answered By: Mohsin Khan