Write to UTF-8 file in Python
Question:
I’m really confused by the codecs.open function. When I do:
file = codecs.open("temp", "w", "utf-8")
file.write(codecs.BOM_UTF8)
file.close()
It gives me the error
UnicodeDecodeError: 'ascii' codec can't decode byte 0xef in position 0: ordinal not in range(128)
If I do:
file = open("temp", "w")
file.write(codecs.BOM_UTF8)
file.close()
It works fine.
Question is: why does the first method fail, and how do I insert the BOM? If the second method is the correct way of doing it, what's the point of using codecs.open(filename, "w", "utf-8")?
Answers:
I believe the problem is that codecs.BOM_UTF8
is a byte string, not a Unicode string. I suspect the file handler is trying to guess what you really mean based on “I’m meant to be writing Unicode as UTF-8-encoded text, but you’ve given me a byte string!”
Try writing the Unicode string for the byte order mark (i.e. Unicode U+FEFF) directly, so that the file just encodes that as UTF-8:
import codecs
file = codecs.open("lol", "w", "utf-8")
file.write(u'\ufeff')
file.close()
(That seems to give the right answer – a file with bytes EF BB BF.)
EDIT: S. Lott’s suggestion of using “utf-8-sig” as the encoding is a better one than explicitly writing the BOM yourself, but I’ll leave this answer here as it explains what was going wrong before.
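In Python 3 this distinction is enforced rather than guessed at: a text-mode file only accepts str, and passing bytes raises a TypeError. A minimal sketch (file names are just examples) showing that writing the BOM character yourself and using the "utf-8-sig" codec produce byte-identical files:

```python
# Sketch: in Python 3, writing "\ufeff" explicitly and using the
# "utf-8-sig" codec yield exactly the same bytes on disk.
import codecs
import os
import tempfile

d = tempfile.mkdtemp()
a = os.path.join(d, "a.txt")
b = os.path.join(d, "b.txt")

with open(a, "w", encoding="utf-8") as f:
    f.write("\ufeff" + "hello")   # BOM written explicitly

with open(b, "w", encoding="utf-8-sig") as f:
    f.write("hello")              # codec prepends the BOM for you

with open(a, "rb") as f:
    raw_a = f.read()
with open(b, "rb") as f:
    raw_b = f.read()

assert raw_a == raw_b == codecs.BOM_UTF8 + b"hello"
```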
Read the following: http://docs.python.org/library/codecs.html#module-encodings.utf_8_sig
Do this
with codecs.open("test_output", "w", "utf-8-sig") as temp:
    temp.write(u"hi mom\n")
    temp.write(u"This has ♭")
The resulting file is UTF-8 with the expected BOM.
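A quick way to verify this (shown in Python 3, where open() takes encoding= directly): reading the file back with "utf-8-sig" transparently strips the BOM, while plain "utf-8" keeps it as a leading U+FEFF character.

```python
# Sketch: "utf-8-sig" strips the BOM on read; plain "utf-8" does not.
import codecs
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "test_output")
with open(path, "w", encoding="utf-8-sig") as temp:
    temp.write("hi mom\n")

with open(path, "rb") as f:
    assert f.read().startswith(codecs.BOM_UTF8)   # BOM bytes on disk

with open(path, encoding="utf-8-sig") as f:
    stripped = f.read()                           # BOM removed on decode
with open(path, encoding="utf-8") as f:
    kept = f.read()                               # BOM kept as U+FEFF

assert stripped == "hi mom\n"
assert kept == "\ufeff" + "hi mom\n"
```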
@S-Lott gives the right procedure, but expanding on the Unicode issues, the Python interpreter can provide more insights.
Jon Skeet is right (unusual) about the codecs
module – it contains byte strings:
>>> import codecs
>>> codecs.BOM
'\xff\xfe'
>>> codecs.BOM_UTF8
'\xef\xbb\xbf'
>>>
Picking another nit, the BOM
has a standard Unicode name, and it can be entered as:
>>> bom = u"\N{ZERO WIDTH NO-BREAK SPACE}"
>>> bom
u'\ufeff'
It is also accessible via unicodedata
:
>>> import unicodedata
>>> unicodedata.lookup('ZERO WIDTH NO-BREAK SPACE')
u'\ufeff'
>>>
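The same relationships hold in Python 3, where codecs.BOM_UTF8 is a bytes object and the character itself is simply '\ufeff'; a small sketch tying the name, the character, and the UTF-8 bytes together:

```python
# Sketch: the BOM character, its Unicode name, and its UTF-8 encoding.
import codecs
import unicodedata

bom = unicodedata.lookup('ZERO WIDTH NO-BREAK SPACE')
assert bom == '\ufeff'
assert bom.encode('utf-8') == codecs.BOM_UTF8      # b'\xef\xbb\xbf'
assert codecs.BOM_UTF8.decode('utf-8') == bom
```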
I use the *nix file command to convert a file of unknown charset into a UTF-8 file:
# -*- encoding: utf-8 -*-
# convert a file of unknown encoding to UTF-8
# (Python 2: the commands module was removed in Python 3)
import codecs
import commands

file_location = "jumper.sub"
file_encoding = commands.getoutput('file -b --mime-encoding %s' % file_location)

file_stream = codecs.open(file_location, 'r', file_encoding)
file_output = codecs.open(file_location + "b", 'w', 'utf-8')
for l in file_stream:
    file_output.write(l)
file_stream.close()
file_output.close()
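The snippet above is Python 2 only. A hedged Python 3 sketch of the same idea, using subprocess and the built-in open(); it assumes the *nix file command is available on PATH, and convert_to_utf8 is a name chosen here for illustration:

```python
# Sketch: re-encode a file of unknown charset to UTF-8 (Python 3).
# Assumes the *nix `file` command is installed.
import subprocess

def convert_to_utf8(src, dst):
    # Ask `file` for its best guess at the source encoding.
    enc = subprocess.check_output(
        ['file', '-b', '--mime-encoding', src], text=True).strip()
    if enc == 'binary':
        raise ValueError('not a text file: %s' % src)
    # Decode with the detected encoding, re-encode as UTF-8.
    with open(src, 'r', encoding=enc) as fin, \
         open(dst, 'w', encoding='utf-8') as fout:
        for line in fin:
            fout.write(line)
```

Note that file's guess is heuristic and can misdetect; for more robust detection a third-party library such as chardet may be a better fit.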
It is very simple; just use this (Python 3). No extra library needed.

with open('text.txt', 'w', encoding='utf-8') as f:
    f.write(text)
If you are using pandas I/O methods such as DataFrame.to_excel() (note it is a DataFrame method, not a module-level function), older versions accepted an encoding parameter, e.g.

df.to_excel("somefile.xlsx", sheet_name="export", encoding='utf-8')

(Recent pandas versions removed encoding from to_excel(), since xlsx files are always Unicode; DataFrame.to_csv() still accepts it.) This works for most international characters.
Python >= 3.5, using pathlib (Path.write_text was added in 3.5):

import pathlib
pathlib.Path("text.txt").write_text(text, encoding='utf-8')  # or 'utf-8-sig' for a BOM
def read_files(file_path):
    with open(file_path, encoding='utf8') as f:
        text = f.read()
    return text

**OR (AND)**

def write_files(text, file_path):
    with open(file_path, 'wb') as f:  # binary write mode ('rb' here was a bug)
        f.write(text.encode('utf8', 'ignore'))

**OR**

document = Document()
document.add_heading(file_path.name, 0)
file_content = file_path.read_text(encoding='UTF-8')
document.add_paragraph(file_content)

**OR**

def read_text_from_file(cale_fisier):
    text = cale_fisier.read_text(encoding='UTF-8')
    print("What I read: ", text)
    return text  # return the text read

def save_text_into_file(cale_fisier, text):
    with open(cale_fisier, "w", encoding='utf-8') as f:  # open (and close) the file
        print("What I wrote: ", text)
        f.write(text)  # write the content to the file

**OR**

def read_text_from_file(file_path):
    with open(file_path, encoding='utf8', errors='ignore') as f:
        text = f.read()
    return text  # return the text read

**OR**

def write_to_file(text, file_path):
    with open(file_path, 'wb') as f:
        f.write(text.encode('utf8', 'ignore'))  # write the content to the file