Convert non-UTF-8 ASCII literals in otherwise UTF-8 text to their respective characters

Question:

I have UTF-8 encoded text that has been mangled and now contains some ‘cp1252’ ASCII literals.
I am trying to isolate the literals and convert them one by one, but the following code does not work and I can’t understand why…

text = "This text contains some ASCII literal codes like x9a and x9e."

# Find all ASCII literal codes in the text
codes = re.findall(r'\x[0-9a-fA-F]{2}', text)

# Replace each ASCII literal code with its decoded character
for code in codes:
    char = bytes(code, 'ascii').decode('cp1252')
    text = text.replace(code, char)

print(text)
Asked By: Markus


Answers:

No regex needed. Encoding in latin1 converts 1:1 from Unicode code points U+0000 to U+00FF to bytes b'\x00' to b'\xff'. Then decode correctly:

>>> text = "This text contains some ASCII literal codes like x9a and x9e."
>>> text.encode('latin1').decode('cp1252')
'This text contains some ASCII literal codes like š and ž.'
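
For reference, a minimal sketch that wraps the same latin1-to-cp1252 round trip in a helper; the function name fix_mojibake and its fallback behaviour are illustrative assumptions, not part of the accepted answer:

def fix_mojibake(text: str) -> str:
    """Re-interpret text that was mis-decoded as latin1 using cp1252 (sketch)."""
    try:
        return text.encode('latin1').decode('cp1252')
    except (UnicodeEncodeError, UnicodeDecodeError):
        # encode('latin1') fails if the text contains characters above U+00FF,
        # and decode('cp1252') fails for the five bytes cp1252 leaves undefined
        # (0x81, 0x8D, 0x8F, 0x90, 0x9D); in either case, return the text unchanged.
        return text

print(fix_mojibake("This text contains some ASCII literal codes like \x9a and \x9e."))
# This text contains some ASCII literal codes like š and ž.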

The text was probably decoded as ISO-8859-1 (another name for Latin-1) in the first place. Ideally, fix that code to decode as cp1252 instead.
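
As a sketch of that fix at the source (the byte string below is a stand-in for wherever the data actually comes from), decoding the original bytes as cp1252 up front makes any later repair unnecessary:

raw = b"This text contains some ASCII literal codes like \x9a and \x9e."

mangled = raw.decode('latin1')   # reproduces the mojibake from the question
correct = raw.decode('cp1252')   # decodes 0x9a and 0x9e as š and ž

print(correct)
# This text contains some ASCII literal codes like š and ž.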

Answered By: Mark Tolonen