How do I re.search or re.match on a whole file without reading it all into memory?

Question:

I want to run a regular expression over an entire file, but I’d like to avoid reading the whole file into memory at once, as I may be working with rather large files in the future. Is there a way to do this? Thanks!

Clarification: I cannot read line-by-line because the pattern can span multiple lines.

Asked By: Evan Fosmark


Answers:

This depends on the file and the regex. The best thing you could do would be to read the file in line by line, but if that does not work for your situation then you might get stuck with pulling the whole file into memory.

Let’s say, for example, that this is your file:

Lorem ipsum dolor sit amet, consectetur
adipiscing elit. Ut fringilla pede blandit
eros sagittis viverra. Curabitur facilisis
urna ABC elementum lacus molestie aliquet.
Vestibulum lobortis semper risus. Etiam
sollicitudin. Vivamus posuere mauris eu
nulla. Nunc nisi. Curabitur fringilla fringilla
elit. Nullam feugiat, metus et suscipit
fermentum, mauris ipsum blandit purus,
non vehicula purus felis sit amet tortor.
Vestibulum odio. Mauris dapibus ultricies
metus. Cras XYZ eu lectus. Cras elit turpis,
ultrices nec, commodo eu, sodales non, erat.
Quisque accumsan, nunc nec porttitor vulputate,
erat dolor suscipit quam, a tristique justo
turpis at erat.

And this is your regex:

consectetur(?=\sadipiscing)

Now this regex uses a positive lookahead and will only match the string “consectetur” if it is immediately followed by a whitespace character and then the string “adipiscing”.

So in this example you would have to read the whole file into memory, because your regex depends on the entire file being available as a single string. This is one of many examples that would require you to have your entire string in memory for a particular regex to work.
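To make this concrete, here is a minimal sketch (assuming the sample text above is saved as lorem.txt, a name used only for illustration). Scanned line by line, the pattern never matches, because the \s in the lookahead has to consume the newline between the two words; scanned as one string, it does:

import re

pattern = re.compile(r'consectetur(?=\sadipiscing)')

# Line by line: no single line contains both words, so nothing matches.
with open('lorem.txt') as f:
    print(any(pattern.search(line) for line in f))  # False

# Whole file as one string: the \s in the lookahead matches the newline.
with open('lorem.txt') as f:
    print(pattern.search(f.read()))                 # a match object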

I guess the unfortunate answer is that it all depends on your situation.

Answered By: Andrew Hare

For single-line patterns you can iterate over the lines of the file, but for multi-line patterns you will have to read all (or part, though that will be hard to keep track of) of the file into memory.

Answered By: sykora

Open the file and iterate over the lines.

import re

fd = open('myfile')
for line in fd:
    if re.match(..., line):
        print line
Answered By: Mark Harrison

This is one way:

import re

REGEX = r'\d+'

with open('/tmp/workfile', 'r') as f:
    for line in f:
        print re.match(REGEX, line)

  1. The with statement in Python 2.5+ takes care of automatic file closure, hence you need not worry about it.
  2. Iterating over the file object is memory efficient; it won’t read more than a line into memory at a given time.
  3. The drawback of this approach is that it can take a lot of time for huge files.

Another approach which comes to my mind is to use read(size) and seek(offset), which read a portion of the file at a time.

import os, re

REGEX = r'\d+'

with open('/tmp/workfile', 'r') as f:
    filesize = os.path.getsize('/tmp/workfile')  # file objects have no size() method
    part = filesize / 10  # a suitable size that you can determine ahead or in the prog.
    position = 0
    while position <= filesize:
        content = f.read(part)
        print re.match(REGEX, content)
        # note: a match that straddles two parts will be missed
        position = position + part
        f.seek(position)

You can also combine these two: create a generator that returns the file contents a certain number of bytes at a time and iterate through them to check your regex. This IMO would be a good approach, as in the sketch below.
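A sketch of that generator idea (names and sizes are illustrative, and it assumes no single match is longer than chunk_size): any match touching the end of the buffer is held back, since it might continue into the next chunk.

import re

def iter_matches(f, pattern, chunk_size=8192):
    # Stream matches from a file without holding it all in memory.
    # Assumes no single match is longer than chunk_size.
    buf = ''
    while True:
        block = f.read(chunk_size)
        at_eof = not block
        buf += block
        last_end = 0
        for m in pattern.finditer(buf):
            if m.end() == len(buf) and not at_eof:
                break  # possibly incomplete; retry once more data arrives
            yield m.group()
            last_end = m.end()
        if at_eof:
            return
        # keep only the unconsumed tail so memory stays bounded
        buf = buf[max(last_end, len(buf) - chunk_size):]

with open('/tmp/workfile') as f:
    for match in iter_matches(f, re.compile(r'\d+')):
        print(match)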

Answered By: Senthil Kumaran

If this is a big deal and worth some effort, you can convert the regular expression into a finite state machine which reads the file. The FSM runs in O(n) time and constant memory, so it stays fast as the file size gets big.

You will be able to efficiently match patterns that span lines in files too large to fit in memory.

The standard algorithm for converting a regular expression to an FSM is Thompson’s construction (regex to NFA), followed by the subset construction to obtain a DFA; both are described in most compiler and automata-theory texts.
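As a toy illustration of the idea (a hand-written DFA for the fixed pattern “abc”, not machine-generated from a regex), the scanner below streams a file in O(n) time and O(1) memory:

def count_abc(f, chunk_size=4096):
    # Hand-built DFA states: 0 = start, 1 = just saw "a", 2 = just saw "ab".
    state, count = 0, 0
    while True:
        chunk = f.read(chunk_size)
        if not chunk:
            return count
        for ch in chunk:
            if state == 0:
                state = 1 if ch == 'a' else 0
            elif state == 1:
                state = 2 if ch == 'b' else (1 if ch == 'a' else 0)
            else:  # state == 2
                if ch == 'c':
                    count += 1
                state = 1 if ch == 'a' else 0

print(count_abc(open('bigfile.txt')))  # hypothetical file name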

Answered By: Mark Harrison

You can use mmap to map the file to memory. The file contents can then be accessed like a normal string:

import re, mmap

with open('/var/log/error.log', 'r+') as f:
  data = mmap.mmap(f.fileno(), 0)
  mo = re.search('error: (.*)', data)
  if mo:
    print "found error", mo.group(1)

This also works for big files; the file contents are loaded from disk lazily, as needed.

Answered By: sth

import re

l = []
f = open(filename, 'r')
for eachline in f:
    match = re.search(r'(<tr align="right"><td>)([0-9]*)(</td><td>)([a-zA-Z]*)(</td><td>)([a-zA-Z]*)(</td>)', eachline)
    if match:
        for i in range(2, 8, 2):
            add = match.group(i)
            l.append(add)
Answered By: Atul Dhingra

Here’s an option for you, using re and mmap, to count the words in a file without building lists or loading the whole file into memory.

import re
from contextlib import closing
from mmap import mmap, ACCESS_READ

with open('filepath.txt', 'r') as f:
    with closing(mmap(f.fileno(), 0, access=ACCESS_READ)) as d:
        print(sum(1 for _ in re.finditer(br'\w+', d)))

Based on @sth’s answer, but with lower memory usage.

Answered By: Jab

Python 3:
To load the file as one big string, use the read() and decode() methods:

import re, mmap


def read_search_in_file(file):
    with open(file, 'r+') as f:
        data = mmap.mmap(f.fileno(), 0).read().decode("utf-8")
        error = re.search(r'error: (.*)', data)
        if error:
            return error.group(1)
Answered By: ggguser