Bytes in a unicode Python string

Question:

In Python 2, Unicode strings may contain both unicode and bytes:

a = u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \xd0\xb5\xd0\xba'

I understand that this is absolutely not something one should write in their own code, but it is a string that I have to deal with.

The bytes in the string above are UTF-8 for ек (Unicode \u0435\u043a).
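(This can be verified in a Python 2 shell:)

>>> u'\u0435\u043a'.encode('utf-8')
'\xd0\xb5\xd0\xba'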

My objective is to get a unicode string containing everything in Unicode, which is to say Русский ек (\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \u0435\u043a).

Encoding it to UTF-8 yields

>>> a.encode('utf-8')
'\xd0\xa0\xd1\x83\xd1\x81\xd1\x81\xd0\xba\xd0\xb8\xd0\xb9 \xc3\x90\xc2\xb5\xc3\x90\xc2\xba'

which, when decoded back from UTF-8, gives the initial string with the bytes still in it, which is not good:

>>> a.encode('utf-8').decode('utf-8')
u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \xd0\xb5\xd0\xba'

I found a hacky way to solve the problem, however:

>>> repr(a)
"u'\\u0420\\u0443\\u0441\\u0441\\u043a\\u0438\\u0439 \\xd0\\xb5\\xd0\\xba'"
>>> eval(repr(a)[1:])
'\\u0420\\u0443\\u0441\\u0441\\u043a\\u0438\\u0439 \xd0\xb5\xd0\xba'
>>> s = eval(repr(a)[1:]).decode('utf8')
>>> s
u'\\u0420\\u0443\\u0441\\u0441\\u043a\\u0438\\u0439 \u0435\u043a'
# Almost there, the bytes are proper now but the former real-unicode characters
# are now escaped with \u's; need to un-escape them.
>>> import re
>>> re.sub(u'\\\\u([a-f\\d]+)', lambda x : unichr(int(x.group(1), 16)), s)
u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \u0435\u043a' # Success!

This works fine but looks very hacky due to its use of eval, repr, and then additional regex’ing of the unicode string representation. Is there a cleaner way?

Asked By: Etienne Perot


Answers:

The problem is that your string is not actually encoded in a specific encoding. Your example string:

a = u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \xd0\xb5\xd0\xba'

is mixing Python's internal representation of unicode strings with UTF-8-encoded text. If we just consider the ‘special’ characters:

>>> orig = u'\u0435\u043a'
>>> bytes = u'\xd0\xb5\xd0\xba'
>>> print orig
ек
>>> print bytes
ÐµÐº

But, you say, bytes is UTF-8 encoded:

>>> print bytes.encode('utf-8')
ÐµÐº
>>> print bytes.encode('utf-8').decode('utf-8')
ÐµÐº

Wrong! But what about:

>>> bytes = '\xd0\xb5\xd0\xba'
>>> print bytes
ек
>>> print bytes.decode('utf-8')
ек

Hurrah.

So, what does this mean for you? It means you're (probably) solving the wrong problem. What you should be asking, or trying to figure out, is why your strings are in this form to begin with, and how to avoid it or fix it before they get mixed up like this.
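For example (just a guess at how this could happen, since we don't know where your data comes from), decoding UTF-8 bytes with the wrong codec, such as Latin-1, before concatenating produces exactly this kind of string:

>>> utf8_bytes = '\xd0\xb5\xd0\xba'   # the UTF-8 encoding of u'\u0435\u043a'
>>> u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 ' + utf8_bytes.decode('latin1')
u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \xd0\xb5\xd0\xba'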

Answered By: beerbajay

You should convert unichrs to chrs, then decode them.

u'\xd0' == u'\u00d0' is True

$ python
>>> import re
>>> a = u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \xd0\xb5\xd0\xba'
>>> re.sub(r'[\000-\377]*', lambda m:''.join([chr(ord(i)) for i in m.group(0)]).decode('utf8'), a)
u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \u0435\u043a'
  • r'[\000-\377]*' will match unichrs u'[\u0000-\u00ff]*'
  • u'\xd0\xb5\xd0\xba' == u'\u00d0\u00b5\u00d0\u00ba'
  • You are using UTF-8-encoded bytes as unicode code points (this is the PROBLEM)
  • I solve the problem by treating those mistaken unichars as the corresponding bytes
  • I find all of these mistaken unichars, convert them to chars, then decode them.

If I’m wrong, please tell me.

Answered By: kev

In Python 2, Unicode strings may contain both unicode and bytes:

No, they may not. They contain Unicode characters.

Within the original string, \xd0 is not a byte that’s part of a UTF-8 encoding. It is the Unicode character with code point 208. u'\xd0' == u'\u00d0'. It just happens that the repr for Unicode strings in Python 2 prefers to represent characters with \x escapes where possible (i.e. code points < 256).

There is no way to look at the string and tell that the \xd0 byte is supposed to be part of some UTF-8 encoded character, or if it actually stands for that Unicode character by itself.

However, if you assume that you can always interpret those values as encoded ones, you could try writing something that analyzes each character in turn (use ord to convert to a code-point integer), decodes characters < 256 as UTF-8, and passes characters >= 256 through as they are.
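For instance, a minimal sketch of that idea (the helper name fix_mixed is invented here, and it assumes every run of low code points really is valid UTF-8; otherwise it will raise UnicodeDecodeError):

def fix_mixed(s):
    # Decode runs of code points < 256 as UTF-8; pass other characters through.
    out, run = [], []
    for c in s:
        if ord(c) < 0x100:
            run.append(chr(ord(c)))   # reinterpret the low code point as a raw byte
        else:
            if run:
                out.append(''.join(run).decode('utf-8'))
                run = []
            out.append(c)
    if run:
        out.append(''.join(run).decode('utf-8'))
    return u''.join(out)

>>> fix_mixed(u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \xd0\xb5\xd0\xba')
u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \u0435\u043a'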

Answered By: Karl Knechtel

(In response to the comments above.) This code converts everything that looks like UTF-8 and leaves other code points as-is:

a = u'\u0420\u0443\u0441 utf:\xd0\xb5\xd0\xba bytes:bl\xe4\xe4'

def convert(s):
    try:
        return s.group(0).encode('latin1').decode('utf8')
    except:
        return s.group(0)

import re
a = re.sub(r'[\x80-\xFF]+', convert, a)
print a.encode('utf8')   

Result:

Рус utf:ек bytes:blää  
Answered By: georg

You’ve already got an answer, but here’s a way to unscramble UTF-8-like Unicode sequences that is less likely to decode latin-1 Unicode sequences in error. The re.sub function:

  1. Matches Unicode characters < U+0100 that resemble valid UTF-8 sequences (ref: RFC 3629).
  2. Encodes the Unicode sequence into its equivalent latin-1 byte sequence.
  3. Decodes the sequence using UTF-8 back into Unicode.
  4. Replaces the original UTF-8-like sequence with the matching Unicode character.

Note this could still match a Unicode sequence if just the right characters appear next to each other, but it is much less likely.

import re

# your example
a = u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \xd0\xb5\xd0\xba'

# printable Unicode characters < 256.
a += ''.join(chr(n) for n in range(32,256)).decode('latin1')

# a few UTF-8 characters decoded as latin1.
a += ''.join(unichr(n) for n in [2**7-1,2**7,2**11-1,2**11]).encode('utf8').decode('latin1')

# Some non-BMP characters
a += u'\U00010000\U0010FFFF'.encode('utf8').decode('latin1')

print repr(a)

# Unicode codepoint sequences that resemble UTF-8 sequences.
p = re.compile(ur'''(?x)
    \xF0[\x90-\xBF][\x80-\xBF]{2} |  # Valid 4-byte sequences
        [\xF1-\xF3][\x80-\xBF]{3} |
    \xF4[\x80-\x8F][\x80-\xBF]{2} |

    \xE0[\xA0-\xBF][\x80-\xBF]    |  # Valid 3-byte sequences
        [\xE1-\xEC][\x80-\xBF]{2} |
    \xED[\x80-\x9F][\x80-\xBF]    |
        [\xEE-\xEF][\x80-\xBF]{2} |

    [\xC2-\xDF][\x80-\xBF]           # Valid 2-byte sequences
    ''')

def replace(m):
    return m.group(0).encode('latin1').decode('utf8')

print
print repr(p.sub(replace,a))

Output:

u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \xd0\xb5\xd0\xba
!"#$%&\'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~\x7f\x80\x81\x82\x83\x84\x85\x86\x87\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa\xab\xac\xad\xae\xaf\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xeb\xec\xed\xee\xef\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff\x7f\xc2\x80\xdf\xbf\xe0\xa0\x80\xf0\x90\x80\x80\xf4\x8f\xbf\xbf'

u'\u0420\u0443\u0441\u0441\u043a\u0438\u0439 \u0435\u043a
!"#$%&\'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_`abcdefghijklmnopqrstuvwxyz{|}~\x7f\x80\x81\x82\x83\x84\x85\x86\x87\x88\x89\x8a\x8b\x8c\x8d\x8e\x8f\x90\x91\x92\x93\x94\x95\x96\x97\x98\x99\x9a\x9b\x9c\x9d\x9e\x9f\xa0\xa1\xa2\xa3\xa4\xa5\xa6\xa7\xa8\xa9\xaa\xab\xac\xad\xae\xaf\xb0\xb1\xb2\xb3\xb4\xb5\xb6\xb7\xb8\xb9\xba\xbb\xbc\xbd\xbe\xbf\xc0\xc1\xc2\xc3\xc4\xc5\xc6\xc7\xc8\xc9\xca\xcb\xcc\xcd\xce\xcf\xd0\xd1\xd2\xd3\xd4\xd5\xd6\xd7\xd8\xd9\xda\xdb\xdc\xdd\xde\xdf\xe0\xe1\xe2\xe3\xe4\xe5\xe6\xe7\xe8\xe9\xea\xeb\xec\xed\xee\xef\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff\x7f\x80\u07ff\u0800\U00010000\U0010ffff'

Answered By: Mark Tolonen

I solved it by

unicodeText.encode("utf-8").decode("unicode-escape").encode("latin1")
Answered By: Tahirhan