Python fastest way to read a large number of small files into memory?

Question:

I’m trying to read a few thousand HTML files stored on disk.

Is there any way to do better than:

import os

for files in os.listdir('.'):
    if files.endswith('.html'):
        with open(files) as f:
            a = f.read()
            # do more stuff
Asked By: DJJ


Answers:

For a similar problem I have used this simple piece of code:

import glob
for file in glob.iglob("*.html"):
    with open(file) as f:
        a = f.read()

iglob doesn’t store all the file names in memory at once, which makes it well suited to a huge directory.
Remember to close files after you have finished; the with open(...) construct takes care of that for you.
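As a minimal sketch of the difference (the pattern and prints are illustrative): glob.glob builds the whole list of matches up front, while glob.iglob hands names out one at a time.

import glob

eager = glob.glob("*.html")     # list of every matching name, built in memory
lazy = glob.iglob("*.html")     # iterator, names produced on demand

print(type(eager))              # a list
print(type(lazy))               # an iterator/generator, not a list
print(next(lazy, None))         # first match, or None if no .html files exist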

Answered By: Thomas8

Here’s some code that’s significantly faster than the usual with open(...) as f: f.read() pattern:

import os

def read_file_bytes(path: str, size=-1) -> bytes:
    """Read a file's contents via low-level os calls, skipping the file-object overhead."""
    fd = os.open(path, os.O_RDONLY)
    try:
        if size == -1:
            size = os.fstat(fd).st_size
        return os.read(fd, size)
    finally:
        os.close(fd)

If you know the maximum size of the files, pass that in as the size argument so you can avoid the stat call.
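For example, a minimal sketch assuming every file is known to stay under a hypothetical 64 KiB bound (the MAX_HTML_SIZE name and the file path are illustrative):

MAX_HTML_SIZE = 64 * 1024  # assumed upper bound on any single .html file

# With a fixed bound there is no fstat() call inside read_file_bytes;
# os.read() simply returns however many bytes the file actually contains.
data = read_file_bytes("page.html", size=MAX_HTML_SIZE)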

Here’s some all-around faster code:

for entry in os.scandir('.'):
    if entry.name.endswith('.html'):
        # On Windows entry.stat(follow_symlinks=False) is free, but on Unix it requires a syscall.
        file_bytes = read_file_bytes(entry.path, entry.stat(follow_symlinks=False).st_size)
        a = file_bytes.decode()  # if a string is needed rather than bytes
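Putting the pieces together, a minimal sketch that reads every page into a dict keyed by file name, assuming the read_file_bytes helper above; the pages dict and the UTF-8 decode are assumptions about what the caller needs:

import os

pages = {}  # file name -> decoded HTML text
for entry in os.scandir('.'):
    if entry.name.endswith('.html'):
        raw = read_file_bytes(entry.path, entry.stat(follow_symlinks=False).st_size)
        pages[entry.name] = raw.decode()  # assumes UTF-8 encoded HTML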
Answered By: Collin Anderson