Pandas, read CSV ignoring extra commas

Question:

I am reading a CSV file with 8 columns into a Pandas data frame. The final column contains an error message, some of which contain commas. This causes the read to fail with: ParserError: Error tokenizing data. C error: Expected 8 fields in line 21922, saw 9

Is there a way to ignore all commas after the 8th field, rather than having to go through the file and remove excess commas?

Code to read file:

import pandas as pd
df = pd.read_csv(r'C:\somepath\output.csv')  # raw string so the backslashes aren't treated as escapes

Line that works:

061AE,Active,001,2017_02_24 15_18_01,00006,1,00013,some message

Line that fails:

061AE,Active,001,2017_02_24 15_18_01,00006,1,00013,longer message, with commas
Asked By: MikeS159


Answers:

You can use the parameter usecols in the read_csv function to limit what columns you read in. For example:

import pandas as pd
df = pd.read_csv(path, usecols=range(8))

if you only want to read the first 8 columns.
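On newer pandas (1.4 and later), another option worth noting: the python engine's on_bad_lines parameter accepts a callable that receives each over-long row as a list of fields, so the overflow can be folded back into the last column. A sketch, assuming the 8-column layout from the question (the header names here are made up):

```python
import io
import pandas as pd

csv_text = (
    "code,status,num,stamp,a,b,c,message\n"
    "061AE,Active,001,2017_02_24 15_18_01,00006,1,00013,some message\n"
    "061AE,Active,001,2017_02_24 15_18_01,00006,1,00013,longer message, with commas\n"
)

# rejoin any fields beyond the 8th back into the final message column
fix_row = lambda fields: fields[:7] + [",".join(fields[7:])]

df = pd.read_csv(io.StringIO(csv_text), engine="python", on_bad_lines=fix_row)
```

Unlike usecols, this keeps the whole message intact instead of discarding the text after the extra commas.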

Answered By: Blazina

You can use re.sub to replace the first few commas with, say, a '|' character, save the intermediate result in a StringIO, then parse that.

import pandas as pd
from io import StringIO
import re

for_pd = StringIO()
with open('MikeS159.csv') as mike:
    for line in mike:
        new_line = re.sub(r',', '|', line.rstrip(), count=7)
        print(new_line, file=for_pd)

for_pd.seek(0)

df = pd.read_csv(for_pd, sep='|', header=None)
print(df)

I put the two lines from your question into a file to get this output.

       0       1  2                    3  4  5   6  
0  061AE  Active  1  2017_02_24 15_18_01  6  1  13   
1  061AE  Active  1  2017_02_24 15_18_01  6  1  13   

                             7  
0                 some message  
1  longer message, with commas  
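As a side note, the same first-7 substitution can be done without a regex, since str.replace also takes a count argument (a quick sketch on the failing line):

```python
import re

line = "061AE,Active,001,2017_02_24 15_18_01,00006,1,00013,longer message, with commas"

# str.replace's third argument limits how many commas are replaced,
# matching re.sub's count parameter
assert line.replace(",", "|", 7) == re.sub(",", "|", line, count=7)
```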
Answered By: Bill Bell

You can try this workaround posted on the Pandas issues page:

import csv
import pandas as pd

df = pd.read_csv('filename.csv', parse_dates=True, dtype=object, delimiter="\t", quoting=csv.QUOTE_NONE, encoding='utf-8')

You can also preprocess the data, changing the first 7 commas (0th to 6th, both inclusive) to semicolons and leaving the ones after that as commas*, using something like:

to_write = []
with open("sampleCSV.csv", "r") as f:
    for line in f:
        chars = list(line)
        # change the first 7 commas (where present) to semicolons
        for _ in range(7):
            if "," in chars:
                chars[chars.index(",")] = ";"
        to_write.append("".join(chars))

You can now read this to_write list as a Pandas object like

data = pd.DataFrame(to_write)
data = pd.DataFrame(data[0].str.split(";").values.tolist())

or write it back into a csv and read using pandas with a semicolon delimiter such as read_csv(csv_path, sep=';').
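Rather than round-tripping through a file on disk, the to_write list can also be fed straight back to read_csv through a StringIO (a sketch, with the list hard-coded here for illustration):

```python
import io
import pandas as pd

# stand-in for the to_write list built above
to_write = [
    "061AE;Active;001;2017_02_24 15_18_01;00006;1;00013;some message\n",
    "061AE;Active;001;2017_02_24 15_18_01;00006;1;00013;longer message, with commas\n",
]

df = pd.read_csv(io.StringIO("".join(to_write)), sep=";", header=None)
```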

I drafted this quickly without rigorous testing, but it should give you some ideas to try. Please comment on whether it helps, and I'll edit it.

*Another option is to delete all commas after the 7th and keep using the comma separator. Either way, the point is to differentiate the first 7 delimiters from the subsequent punctuation.
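That footnote variant, stripping the surplus commas instead of changing the delimiter, can be sketched with str.split and a maxsplit (not from the answer, just an illustration on the failing line):

```python
line = "061AE,Active,001,2017_02_24 15_18_01,00006,1,00013,longer message, with commas"

# split on the first 7 commas only, then drop any commas left in the tail
head = line.split(",", 7)
fixed = ",".join(head[:7] + [head[7].replace(",", "")])
```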

Answered By: FatihAkici

To add to @Tblaz's answer: if you use Google Colab you can use this solution. In my case the extra comma was in column 24, so I only had to read 23 columns:

import pandas as pd
from google.colab import files
import io
uploaded = files.upload()
x_train = pd.read_csv(io.StringIO(uploaded['x_train.csv'].decode('utf-8')), skiprows=1, usecols=range(23), header=None)
Answered By: DINA TAKLIT