How can we figure out why certain uuencoded files are not decoding properly using Python?

Question:

We are trying to decode some uuencoded PDF files that are embedded in a txt file.

Most of the PDF files decode just fine using Python’s uuencode library. Here is the code we are using:

try:
    decoded_file, m = uudecode(fileString)
except:
    decoded_file = ''
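
(For reference, a minimal sketch of the same decode step using the standard library’s uu module; the uudecode() helper above is not part of the standard library, and the file name is just the first example filing linked below.)

import io
import uu

with open('0000000000-11-020832.txt', 'rb') as in_file:
    out = io.BytesIO()
    # uu.decode() scans forward to the "begin" line, decodes until "end",
    # and prints "Warning: ..." to stderr for malformed lines unless quiet=True.
    uu.decode(in_file, out)
    decoded_file = out.getvalue()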

However, some of the files cannot be opened after they are decoded. We receive the message “There was an error opening this document. The file is damaged and could not be repaired.”

The only thing we could find on Google is that our files might have been encoded using base64, and that the Python uuencoding module only supports base32. Is there a way to tell whether a file was encoded using base64 or base32?

Here is an example of a txt file that had an embedded uuencoded pdf that we successfully decoded:
http://www.sec.gov/Archives/edgar/data/1108046/000000000011020832/0000000000-11-020832.txt

And here is an example of one that failed:
http://www.sec.gov/Archives/edgar/data/914257/000000000011005978/0000000000-11-005978.txt

While we are decoding these files in Python, no errors of any kind pop up and everything seems to work as it should. What could be causing them not to decode properly? Is there a way we could flag this while we are processing them?

Asked By: Colby


Answers:

>>> import uu
>>> uu.decode(open('0000000000-11-005978.txt'))
Warning: Trailing garbage

The source data itself is damaged. This is further evidenced by the .. at the beginning of a line near the end.

$ python -c "import urllib2; print len(urllib2.urlopen('http://www.sec.gov/Archives/edgar/data/914257/000000000011005978/0000000000-11-005978.txt').read().decode('uu'))"
43124

Nevertheless, this works just fine, because the uu codec silently applies the same workaround for broken uuencoders that uu.decode uses.
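
For anyone on Python 3, a rough equivalent of that one-liner (assuming the SEC URL is still reachable; sec.gov may also want a User-Agent header these days):

import codecs
import urllib.request

url = ('http://www.sec.gov/Archives/edgar/data/914257/'
       '000000000011005978/0000000000-11-005978.txt')
raw = urllib.request.urlopen(url).read()
# The 'uu_codec' bytes-to-bytes codec skips ahead to the "begin" line, decodes
# until "end", and silently applies the broken-uuencoder workaround mentioned above.
pdf_bytes = codecs.decode(raw, 'uu_codec')
print(len(pdf_bytes))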

Answered By: pyroscope

I know I’m late to the party, but I’ve discovered at least one source of bugs encountered when parsing the UUEncoded text in these filings.

I wrote a GitHub issue describing the problem and provided a reasonable workaround. It seems the Python implementation wrongly assumes that padding characters are always whitespace.

Here’s the workaround:

import binascii
from binascii import a2b_uu
from io import BytesIO

my_bytes = BytesIO()
line_bytes = b'..#0HQ,38-"B4E14]&#0H_'  # a problematic line taken from one of the filings
line = line_bytes.decode(encoding='ascii')
try:
    my_bytes.write(a2b_uu(line))
except binascii.Error as err:
    if 'trailing garbage' in str(err).lower():
        # The uuencode length byte gives the number of decoded bytes on this line.
        n_bytes = line_bytes[0] - 32
        assert n_bytes <= 45 and n_bytes <= len(line[1:])
        # Replace the length byte with the maximum length specifier ('M' = 45 bytes),
        # decode the whole line, then truncate to the real length.
        workaround_line = f'M{line[1:]}'
        data = a2b_uu(workaround_line)[:n_bytes]
        print(workaround_line, data)  # inspect the patched line and the recovered bytes
        my_bytes.write(data)
    else:
        raise
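
To use this for the original goal of flagging problems while processing whole filings, the per-line trick can be wrapped in a loop over the lines between the begin and end markers. A rough sketch (the helper name and the commented file name are illustrative, not from the original answer):

import binascii
from binascii import a2b_uu
from io import BytesIO

def decode_embedded_uu(lines):
    """Decode the uuencoded block in an iterable of byte lines, flagging repaired lines."""
    out, bad_lines, in_block = BytesIO(), [], False
    for raw in lines:
        stripped = raw.rstrip(b'\r\n')
        if not in_block:
            in_block = stripped.startswith(b'begin ')
            continue
        if stripped == b'end':
            break
        if not stripped:
            continue
        line = stripped.decode('ascii')
        try:
            out.write(a2b_uu(line))
        except binascii.Error as err:
            if 'trailing garbage' not in str(err).lower():
                raise
            # Same trick as above: force the max length specifier, then truncate.
            n_bytes = stripped[0] - 32
            out.write(a2b_uu('M' + line[1:])[:n_bytes])
            bad_lines.append(line)  # flag the repaired line for later inspection
    return out.getvalue(), bad_lines

# Example usage (hypothetical local copy of one of the filings):
# with open('0000000000-11-005978.txt', 'rb') as f:
#     pdf_bytes, flagged = decode_embedded_uu(f)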

Answered By: Alexander Medeiros