Prevent pandas from interpreting 'NA' as NaN in a string

Question:

The pandas read_csv() method interprets ‘NA’ as nan (not a number) instead of a valid string.

In the simple case below note that the output in row 1, column 2 (zero based count) is ‘nan’ instead of ‘NA’.

sample.tsv (tab delimited)

PDB CHAIN SP_PRIMARY RES_BEG RES_END PDB_BEG PDB_END SP_BEG SP_END
5d8b N P60490 1 146 1 146 1 146
5d8b NA P80377 1 126 1 126 1 126
5d8b O P60491 1 118 1 118 1 118

read_sample.py

import pandas as pd

df = pd.read_csv(
    'sample.tsv',
    sep='\t',
    encoding='utf-8',
)

for df_tuples in df.itertuples(index=True):
    print(df_tuples)

output

(0, u'5d8b', u'N', u'P60490', 1, 146, 1, 146, 1, 146)
(1, u'5d8b', nan, u'P80377', 1, 126, 1, 126, 1, 126)
(2, u'5d8b', u'O', u'P60491', 1, 118, 1, 118, 1, 118)

Additional Information

Re-writing the file with the data in the 'CHAIN' column quoted and then using the quotechar parameter has the same result. Passing a dictionary of types via the dtype parameter (dtype=dict(valid_cols)) does not change the result either.
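For illustration, a minimal sketch of that dtype attempt (using dtype={'CHAIN': str} as a stand-in for dict(valid_cols), which is not shown above): the 'NA' marker is converted to NaN while the file is parsed, before the requested dtype is applied, so forcing the column to str does not help.

import pandas as pd

# Forcing the CHAIN column to str does not prevent the conversion:
# 'NA' is recognized as a missing-value marker during parsing,
# before the requested dtype is applied.
df = pd.read_csv(
    'sample.tsv',
    sep='\t',
    encoding='utf-8',
    dtype={'CHAIN': str},
)
print(df.loc[1, 'CHAIN'])  # nan, not 'NA'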

An old answer to Prevent pandas from automatically inferring type in read_csv suggests first using a numpy record array to parse the file, but given the ability to now specify column dtypes, this shouldn’t be necessary.

Note that itertuples() is used to preserve dtypes, as described in the iterrows documentation: "To preserve dtypes while iterating over the rows, it is better to use itertuples() which returns tuples of the values and which is generally faster than iterrows."

Example was tested on Python 2 and 3 with pandas version 0.16.2, 0.17.0, and 0.17.1.


Is there a way to capture a valid string ‘NA’ instead of it being converted to nan?

Asked By: binarysubstrate


Answers:

You could use the parameters keep_default_na and na_values to set all the NA values by hand (see the read_csv docs):

import pandas as pd
from io import StringIO

data = """
PDB CHAIN SP_PRIMARY RES_BEG RES_END PDB_BEG PDB_END SP_BEG SP_END
5d8b N P60490 1 146 1 146 1 146
5d8b NA P80377 _ 126 1 126 1 126
5d8b O P60491 1 118 1 118 1 118
"""

df = pd.read_csv(StringIO(data), sep=' ', keep_default_na=False, na_values=['_'])

In [130]: df
Out[130]:
    PDB CHAIN SP_PRIMARY  RES_BEG  RES_END  PDB_BEG  PDB_END  SP_BEG  SP_END
0  5d8b     N     P60490        1      146        1      146       1     146
1  5d8b    NA     P80377      NaN      126        1      126       1     126
2  5d8b     O     P60491        1      118        1      118       1     118

In [144]: df.CHAIN.apply(type)
Out[144]:
0    <class 'str'>
1    <class 'str'>
2    <class 'str'>
Name: CHAIN, dtype: object

EDIT

All default NA values, from the na_values documentation (as of pandas 1.0.0):

The default NaN recognized values are ['-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A N/A', '#N/A', 'N/A', 'n/a', 'NA', '', '#NA', 'NULL', 'null', 'NaN', '-NaN', 'nan', '-nan', ''].
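If you want to check which markers your installed pandas version treats as NaN, one option is the constant the parser itself uses (an internal name, so it may move between releases):

from pandas._libs.parsers import STR_NA_VALUES  # internal; not a public API

# The set of strings read_csv recognizes as NaN by default.
print(sorted(STR_NA_VALUES))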

Answered By: Anton Protopopov

For me, the solution came from using the parameter na_filter=False:

df = pd.read_csv(file_, header=0, dtype=object, na_filter=False)
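Note that na_filter=False disables missing-value detection for every column. A sketch against the question's sample.tsv (file name assumed from the question): 'NA' and empty strings are kept verbatim, and with dtype=object every value stays a string.

import pandas as pd

# No value is treated as missing, so 'NA' survives as a literal string.
df = pd.read_csv('sample.tsv', sep='\t', dtype=object, na_filter=False)
print(df.loc[1, 'CHAIN'])  # 'NA'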
Answered By: Matthew Coelho

Setting the keep_default_na parameter does the trick.

Here is an example of keeping NA as string value while reading CSV file using Pandas.

data.csv:

country_name,country_code
Mexico,MX
Namibia,NA

read_data.py:

import pandas as pd
data = pd.read_csv("data.csv", keep_default_na=False)
print(data.describe())
print(data)

Output:

       country_name country_code
count             2            2
unique            2            2
top         Namibia           MX
freq              1            1

  country_name country_code
0       Mexico           MX
1      Namibia           NA

Answered By: arsho

This approach works for me:

import pandas as pd

df = pd.read_csv('Test.csv')

Read with the defaults, the 'NA' entries in Test.csv come back as NaN:

     co1 col2  col3  col4
a      b    c     d     e
NaN  NaN  NaN   NaN   NaN
2      3    4     5   NaN
I copied the default list of values that are interpreted as NaN and commented out "NA", which I wanted to keep as a valid string. This approach still treats the other values as NaN, just not "NA".

na_values = ["",
             "#N/A",
             "#N/A N/A",
             "#NA",
             "-1.#IND",
             "-1.#QNAN",
             "-NaN",
             "-nan",
             "1.#IND",
             "1.#QNAN",
             "<NA>",
             "N/A",
#             "NA",
             "NULL",
             "NaN",
             "n/a",
             "nan",
             "null"]

df1 = pd.read_csv('Test.csv', na_values=na_values, keep_default_na=False)

      co1  col2  col3  col4
a     b     c     d     e
NaN  NA   NaN    NA   NaN
2     3     4     5   NaN
Answered By: Suman Shrestha

While reading the file with pandas, you can pass the parameter na_filter=False or keep_default_na=False on that line:

import pandas as pd

df = pd.read_csv('sample.tsv', sep='\t', encoding='utf-8', na_filter=False)

for df_tuples in df.itertuples(index=True):
    print(df_tuples)
Answered By: Mohanraj M

Building off of Anton Protopopov's answer, a clean way to minimally modify the default values (i.e. remove the values you don't want parsed as NaN and add those that you do):

import pandas as pd
from pandas._libs.parsers import STR_NA_VALUES

# Drop 'NA' from the default markers and add '_' as an extra one.
accepted_na_values = STR_NA_VALUES - {'NA'} | {'_'}

path = 'myexcel.xlsx'
df = pd.read_excel(path, keep_default_na=False, na_values=accepted_na_values)
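The same pattern applies to read_csv. Keep in mind that pandas._libs.parsers is a private module, so the import may break between releases. A sketch against the question's sample.tsv:

import pandas as pd
from pandas._libs.parsers import STR_NA_VALUES  # private module; may change between versions

# Keep the default missing-value markers except 'NA', and also treat '_' as missing.
custom_na_values = STR_NA_VALUES - {'NA'} | {'_'}
df = pd.read_csv('sample.tsv', sep='\t', keep_default_na=False, na_values=custom_na_values)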
Answered By: Jon