Read file in chunks – RAM-usage, reading strings from binary files

Question:

I’d like to understand the difference in RAM usage between these methods when reading a large file in Python.

Version 1, found here on stackoverflow:

def read_in_chunks(file_object, chunk_size=1024):
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        yield data

f = open(file, 'rb')
for piece in read_in_chunks(f):
    process_data(piece)
f.close()

Version 2, I used this before I found the code above:

f = open(file, 'rb')
while True:
    piece = f.read(1024)
    process_data(piece)
f.close()

The file is read partially in both versions, and the current piece can be processed as it is read. In the second example, piece gets new content on every cycle, so I thought this would do the job without loading the complete file into memory.

But I don’t really understand what yield does, and I’m pretty sure I got something wrong here. Could anyone explain that to me?


There is something else that puzzles me, besides the method used:

The content of the piece I read is defined by the chunk size, 1 KB in the examples above. But what if I need to look for a string in the file, something like "ThisIsTheStringILikeToFind"?

Depending on where in the file the string occurs, one piece could contain the part "ThisIsTheStr" while the next piece contains "ingILikeToFind". With such a method it’s not possible to detect the whole string in any single piece.

Is there a way to read a file in chunks, but still account for strings that span two chunks?
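
A common way to handle a match that straddles a chunk boundary is to carry over the last len(needle) - 1 bytes of each chunk and prepend them to the next one, so the search buffer always covers the boundary. A minimal sketch of that idea (the find_in_chunks helper and the file name are made up for illustration):

needle = b'ThisIsTheStringILikeToFind'

def find_in_chunks(filename, needle, chunk_size=1024):
    """Return the file offset of the first occurrence of needle, or -1."""
    overlap = b''
    offset = 0  # number of bytes read before the current chunk
    with open(filename, 'rb') as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return -1
            buf = overlap + chunk  # covers the previous chunk boundary
            pos = buf.find(needle)
            if pos != -1:
                return offset - len(overlap) + pos
            # keep the tail so a boundary-straddling match is found next time
            overlap = buf[-(len(needle) - 1):] if len(needle) > 1 else b''
            offset += len(chunk)

print(find_in_chunks('testfile.dat', needle))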

Asked By: xph


Answers:

yield is the keyword in Python used to define generator functions. That means that the next time the function is called (or iterated on), execution starts back up at the exact point it left off the last time. The two versions behave identically as far as reading goes; the only real difference is that the first one uses a tiny bit more call-stack space. However, the first one is far more reusable, so from a program-design standpoint it is actually better.
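
To make the resumption behaviour concrete, here is a tiny sketch of a generator (count_up_to is just an illustrative name, not code from the question):

def count_up_to(n):
    i = 0
    while i < n:
        yield i  # execution pauses here and resumes on the next call
        i += 1

gen = count_up_to(3)
print(next(gen))  # 0 -- runs the body until the first yield
print(next(gen))  # 1 -- resumes right after the yield and loops once
print(list(gen))  # [2] -- consumes whatever is left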

EDIT: Also, one other difference is that the first one will stop reading once all the data has been read, the way it should, but the second one will only stop if f.read() or process_data() throws an exception (at end of file, read() keeps returning an empty bytes object, so the loop never breaks on its own). To make the second one work properly, you need to modify it like so:

f = open(file, 'rb')
while True:
    piece = f.read(1024)  
    if not piece:
        break
    process_data(piece)
f.close()

Answered By: AJMansfield

I think probably the best and most idiomatic way to do this would be to use the built-in iter() function along with its optional sentinel argument to create and use an iterable, as shown below. Note that the last chunk might be less than the requested chunk size if the file size isn’t an exact multiple of it.

from functools import partial

CHUNK_SIZE = 1024
filename = 'testfile.dat'

with open(filename, 'rb') as file:
    for chunk in iter(partial(file.read, CHUNK_SIZE), b''):
        process_data(chunk)
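
For reference, the sentinel form of iter() shown above calls file.read(CHUNK_SIZE) repeatedly and stops as soon as the call returns the sentinel b''; it behaves roughly like this explicit loop (an illustrative expansion, not code from the answer):

with open(filename, 'rb') as file:
    while True:
        chunk = file.read(CHUNK_SIZE)
        if chunk == b'':  # the sentinel value ends the iteration
            break
        process_data(chunk)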

Update: I don’t know when it was added, but almost exactly the code above is now shown as an example in the official documentation of the iter() function.

Answered By: martineau

Starting from Python 3.8 you might also use an assignment expression (the walrus operator):

with open('file.name', 'rb') as file:
    while chunk := file.read(1024):
        process_data(chunk)

The last chunk may be smaller than the requested size (1024 bytes here). Since read() returns b"" once the file has been fully read, the while loop will terminate.

Answered By: hiro protagonist

Adding on to the answers given above:

When I was reading a file in chunks (let’s suppose a text file named split.txt), the issue I faced was that my use case processed the data line by line, and because the file was read in chunks, a chunk would sometimes end with a partial line, which broke my code (it expected a complete line to process).

After reading here and there, I came to know I could overcome this issue by keeping track of the last bit of each chunk: if the chunk contains a \n, it ends with a complete line; otherwise I store the partial last line in a variable and concatenate it with the unfinished beginning of the next chunk. With this I was able to get over the issue (a more generic sketch of the same idea follows after the sample code).

Sample code:

# in this function I am reading the file in chunks
def read_in_chunks(file_object, chunk_size=1024):
    """Lazy function (generator) to read a file piece by piece.
    Default chunk size: 1k."""
    while True:
        data = file_object.read(chunk_size)
        if not data:
            break
        yield data

# file where I am writing my final output
write_file = open('split.txt', 'w')

# variable I am using to store the last partial line from the chunk
placeholder = ''
file_count = 1

try:
    with open('/Users/rahulkumarmandal/Desktop/combined.txt') as f:
        for piece in read_in_chunks(f):
            #print('---->>>',piece,'<<<--')
            line_by_line = piece.split('\n')

            for one_line in line_by_line:
                # if placeholder is set, the last chunk ended with a partial line that we need to concatenate with the current one
                if placeholder:
                    # print('----->',placeholder)
                    # concatenating the previous partial line with the current one
                    one_line=placeholder+one_line
                    # then setting the placeholder empty so that next time if there's a partial line in the chunk we can place it in the variable to be concatenated further
                    placeholder=''
                
                # further logic that revolves around my specific use case
                segregated_data= one_line.split('~')
                #print(len(segregated_data),type(segregated_data), one_line)
                if len(segregated_data) < 18:
                    placeholder=one_line
                    continue
                else:
                    placeholder=''
                #print('--------',segregated_data)
                if segregated_data[2]=='2020' and segregated_data[3]=='2021':
                    #write this
                    data=str("~".join(segregated_data))
                    #print('data',data)
                    #f.write(data)
                    write_file.write(data)
                    write_file.write('\n')
                    print(write_file.tell())
                elif segregated_data[2]=='2021' and segregated_data[3]=='2022':
                    #write this
                    data=str("-".join(segregated_data))
                    write_file.write(data)
                    write_file.write('\n')
                    print(write_file.tell())
except Exception as e:
    print('error is', e)                
Answered By: officialrahulmandal
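
A more generic sketch of the same carry-over idea, buffering the trailing partial line so the generator only ever yields complete lines (illustrative code, not from the answer above; read_lines_in_chunks is a made-up helper name):

def read_lines_in_chunks(file_object, chunk_size=1024):
    """Yield complete lines from a file that is read in fixed-size chunks."""
    remainder = ''
    while True:
        data = file_object.read(chunk_size)
        if not data:
            if remainder:  # flush a final line that had no trailing newline
                yield remainder
            break
        data = remainder + data
        lines = data.split('\n')
        remainder = lines.pop()  # '' if the chunk ended exactly on a newline
        for line in lines:
            yield line

with open('combined.txt') as f:
    for line in read_lines_in_chunks(f):
        process_data(line)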