"for line in…" results in UnicodeDecodeError: 'utf-8' codec can't decode byte


Here is my code,

for line in open('u.item'):
    # Read each line

Whenever I run this code it gives the following error:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe9 in position 2892: invalid continuation byte

I tried to solve this by adding an extra parameter to open(). The code looks like this:

for line in open('u.item', encoding='utf-8'):
    # Read each line

But again it gives the same error. What should I do then?

Asked By: SujitS



Your file doesn’t actually contain UTF-8 encoded data; it contains some other encoding. Figure out what that encoding is and use it in the open call.

In Windows-1252 encoding, for example, the 0xe9 would be the character é.
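For example, a quick check with some made-up sample bytes shows the same byte failing as UTF-8 but decoding cleanly as Windows-1252:

```python
raw = b'Caf\xe9 au lait'  # sample bytes; 0xE9 is 'é' in Windows-1252 / Latin-1

print(raw.decode('windows-1252'))  # Café au lait
try:
    raw.decode('utf-8')
except UnicodeDecodeError as exc:
    print(exc)  # same "invalid continuation byte" error as in the question
```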

Answered By: Mark Ransom

As suggested by Mark Ransom, I found the right encoding for the problem. The encoding was ISO-8859-1, so replacing open('u.item', encoding='utf-8') with open('u.item', encoding='ISO-8859-1') solved it.

Answered By: SujitS

This is an example for converting a CSV file in Python 3:

    import csv
    from sys import argv

    try:
        inputReader = csv.reader(open(argv[1], encoding='ISO-8859-1'), delimiter=',', quotechar='"')
    except IOError:
        pass  # handle the error here
Answered By: user6832484

Try this to read using Pandas:

pd.read_csv('u.item', sep='|', names=m_cols, encoding='latin-1')
Answered By: Shashank

If you are using Python 2, the following will be the solution:

import io
for line in io.open("u.item", encoding="ISO-8859-1"):
    # Do something

In Python 2 the built-in open() doesn't accept an encoding parameter, so passing one gives the following error:

TypeError: 'encoding' is an invalid keyword argument for this function

Answered By: Jeril

The following also worked for me. ISO-8859-1 saves a lot of trouble, especially if you are using speech recognition APIs.


file = open('../Resources/' + filename, 'r', encoding="ISO-8859-1")
Answered By: Ryoji Kuwae Neto

Sometimes open(filepath) raises the same error when filepath is not actually a file, so first make sure the file you're trying to open exists:

import os
assert os.path.isfile(filepath)
Answered By: xtluo

You could resolve the problem with:

for line in open(your_file_path, 'rb'):

'rb' reads the file in binary mode, so each line comes back as a bytes object rather than a decoded string.
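A minimal sketch of that approach (the helper name and the Latin-1 fallback are my own choices, not from the original answer): read the raw bytes, then decode each line yourself, falling back when a line is not valid UTF-8.

```python
def read_lines(path):
    """Read a file in binary mode and decode each line,
    falling back to Latin-1 when a line is not valid UTF-8."""
    lines = []
    with open(path, 'rb') as f:
        for raw in f:
            try:
                lines.append(raw.decode('utf-8'))
            except UnicodeDecodeError:
                lines.append(raw.decode('latin-1'))
    return lines
```

This keeps valid UTF-8 lines intact while still recovering the odd Latin-1 line, which is often what a mixed-origin data file actually contains.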

Answered By: Ozcar Nguyen

This works:

open('filename', encoding='latin-1')

or

open('filename', encoding='ISO-8859-1')
Answered By: Ayesha Siddiqa

You can try this way:

open('u.item', encoding='utf8', errors='ignore')
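Be aware that errors='ignore' silently discards the bytes it cannot decode, so data is lost; errors='replace' at least leaves a visible marker. A quick comparison with made-up sample bytes:

```python
raw = b'Caf\xe9 au lait'  # 0xE9 is not valid UTF-8 here

print(raw.decode('utf-8', errors='ignore'))   # byte silently dropped: 'Caf au lait'
print(raw.decode('utf-8', errors='replace'))  # marker kept: 'Caf\ufffd au lait'
```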
Answered By: Farid Chowdhury

Open your file with Notepad++ and use the "Encoding" (or "Encodage") menu to identify the current encoding, or to convert the file from ANSI to UTF-8 or to the ISO 8859-1 code page.

Answered By: JGaber

So that this page turns up in searches for a similar question (about an error with UTF-8), I'm leaving my solution here for others.

I had problem with .csv file opening with that description:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe9 in position 150: invalid continuation byte

I opened the file with Notepad and counted to position 150: it was a Cyrillic character.
I re-saved the file with 'Save As...', choosing UTF-8 as the encoding, and my program started to work.

Answered By: Nikita Axenov

Based on another question on Stack Overflow and previous answers in this post, I would like to add a way to find the right encoding.

If your script runs on a Linux OS, you can get the encoding with the file command:

file --mime-encoding <filename>

Here is a python script to do that for you:

import sys
import subprocess

if len(sys.argv) < 2:
    print("Usage: {} <filename>".format(sys.argv[0]))
    sys.exit(1)

def find_encoding(fname):
    """Find the encoding of a file using the file command"""

    # find the full path of the file command
    which_run = subprocess.run(['which', 'file'], stdout=subprocess.PIPE)
    if which_run.returncode != 0:
        print("Unable to find 'file' command ({})".format(which_run.returncode))
        return None

    file_cmd = which_run.stdout.decode().replace('\n', '')

    # run the file command to get the MIME encoding
    file_run = subprocess.run([file_cmd, '--mime-encoding', fname],
                              stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    if file_run.returncode != 0:
        print(file_run.stderr.decode(), file=sys.stderr)
        return None

    # return the encoding name only
    return file_run.stdout.decode().split()[1]

# test
print("Encoding of {}: {}".format(sys.argv[1], find_encoding(sys.argv[1])))
Answered By: Alain Cherpin

I was using a dataset downloaded from Kaggle while reading this dataset it threw this error:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xf1 in position
183: invalid continuation byte

So this is how I fixed it.

import pandas as pd

pd.read_csv('top50.csv', encoding='ISO-8859-1')

Answered By: Vineet Singh

Replace the encoding with encoding='ISO-8859-1':

for line in open('u.item', encoding='ISO-8859-1'):


Answered By: Anoop Ashware

Use this if you are loading data directly from GitHub or Kaggle:

DF = pd.read_csv(file, encoding='ISO-8859-1')

Answered By: SONY ANNEM

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xed in position 7044: invalid continuation byte

The above error occurs due to a mismatched encoding.

Solution: use encoding='latin-1'

Reference: https://pandas.pydata.org/docs/search.html?q=encoding

Answered By: Kalluri

I keep coming across this error, and often the problem is not resolved by encoding='utf-8' but in fact by engine='python', like this:

import pandas as pd

file = r"c:\path\to_my\file.csv"  # raw string so backslashes are not treated as escapes
df = pd.read_csv(file, engine='python')



Answered By: D.L

In my case, this issue occurred because I had changed the extension of an Excel file (.xlsx) directly to .csv...

The solution was to open the file and save it as a new .csv file (i.e., File -> Save As -> select the .csv extension and save). This worked for me.

Answered By: afrah

My issue was similar in that UTF-8 text was getting passed to the Python script.

In my case, it was from SQL using the sp_execute_external_script in the Machine Learning service for SQL Server. For whatever reason, VARCHAR data appears to get passed as UTF-8, whereas NVARCHAR data gets passed as UTF-16.

Since there’s no way to specify the default encoding in Python, and no user-editable Python statement parsing the data, I had to use the SQL CONVERT() function in my SELECT query in the @input_data parameter.

So, while this query

EXEC sp_execute_external_script @language = N'Python', 
@script = N'
OutputDataSet = InputDataSet
',
@input_data_1 = N'SELECT id, text FROM the_error;'
WITH RESULT SETS (([id] int, [text] nvarchar(max)));

gives the error

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xc7 in position 0: unexpected end of data

using CONVERT(type, data) instead (CAST(data AS type) would also work) fixes it:

EXEC sp_execute_external_script @language = N'Python', 
@script = N'
OutputDataSet = InputDataSet
',
@input_data_1 = N'SELECT id, CONVERT(NVARCHAR(max), text) FROM the_error;'
WITH RESULT SETS (([id] INT, [text] NVARCHAR(max)));

and returns:

id  text
1   Ç
Answered By: Mark Smith