Limiting Memory Use in a *Large* Django QuerySet

Question:

I have a task which needs to be run on ‘most’ objects in my database every so often (once a day, once a week, whatever). Basically this means that I have a query that looks like this running in its own thread.

for model_instance in SomeModel.objects.all():
    do_something(model_instance)

(Note that it’s actually a filter() not all(), but nonetheless I still end up selecting a very large set of objects.)

The problem I’m running into is that after running for a while the thread is killed by my hosting provider because I’m using too much memory. I’m assuming this is because, even though the QuerySet returned by my query initially has a very small memory footprint, it grows as the QuerySet caches each model_instance while I iterate through them.
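
To illustrate what I mean, the caching looks roughly like this (a rough sketch; _result_cache is a private Django internal, and depending on the Django version it fills either incrementally or all at once):

queryset = SomeModel.objects.all()
for model_instance in queryset:
    do_something(model_instance)
    # Every instance fetched so far stays referenced by the queryset's own
    # internal cache (queryset._result_cache), so the process's memory use
    # keeps growing until the whole result set has been loaded.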

My question is, “what is the best way to iterate through almost every SomeModel in my database in a memory efficient way?” or perhaps my question is “how do I ‘un-cache’ model instances from a django queryset?”

EDIT: I’m actually using the results of the queryset to build a series of new objects. As such, I don’t end up updating the queried-for objects at all.

Asked By: Chris W.


Answers:

I’m continuing research and it kind of looks like I want to do the equivalent of an SQL OFFSET and LIMIT, which according to the Django docs on Limiting QuerySets means I want to use the slice syntax, e.g., SomeModel.objects.all()[15:25]

So now I’m thinking maybe something like this is what I’m looking for:

# Figure out the number of objects I can safely hold in memory
# I'll just say 100 for right now
number_of_objects = 100 
count = SomeModel.objects.all().count()
for i in xrange(0,count,number_of_objects):
    smaller_queryset = SomeModel.objects.all()[i:i+number_of_objects]
    for model_instance in smaller_queryset:
        do_something(model_instance)

By my reckoning this would make it so that smaller_queryset would never grow too large.

Answered By: Chris W.

So what I actually ended up doing is building something that you can ‘wrap’ a QuerySet in. It works by making a deepcopy of the QuerySet and slicing it (e.g., some_queryset[15:45]), then making another deepcopy of the original QuerySet once that slice has been completely iterated through. This means that only the objects returned in ‘this’ particular slice are stored in memory.

import copy
import logging

logger = logging.getLogger(__name__)

class MemorySavingQuerysetIterator(object):

    def __init__(self,queryset,max_obj_num=1000):
        self._base_queryset = queryset
        self._generator = self._setup()
        self.max_obj_num = max_obj_num

    def _setup(self):
        for i in xrange(0,self._base_queryset.count(),self.max_obj_num):
            # By making a copy of of the queryset and using that to actually access
            # the objects we ensure that there are only `max_obj_num` objects in
            # memory at any given time
            smaller_queryset = copy.deepcopy(self._base_queryset)[i:i+self.max_obj_num]
            logger.debug('Grabbing next %s objects from DB' % self.max_obj_num)
            for obj in smaller_queryset.iterator():
                yield obj

    def __iter__(self):
        return self

    def next(self):
        return self._generator.next()

So instead of…

# Something that returns *a lot* of objects
for obj in SomeObject.objects.filter(foo='bar'):
    do_something(obj)

You would do…

for obj in MemorySavingQuerysetIterator(SomeObject.objects.filter(foo='bar')):
    do_something(obj)

Please note that the intention of this is to save memory in your Python interpreter. It essentially does this by making more database queries. Usually people are trying to do the exact opposite of that, i.e., minimize database queries as much as possible without regard to memory usage. Hopefully somebody will find this useful though.

Answered By: Chris W.

You can’t use Model.objects.all().iterator(), because it will fetch all the elements in your table at once. Neither can you use Model.objects.all()[offset:offset+pagesize], because it will cache the results. Either will exceed your memory limit.

I’ve tried to mix both solutions, and it worked:

offset = 0
pagesize = 1000
count = Model.objects.all().count()
while offset < count:
    for m in Model.objects.all()[offset : offset + pagesize].iterator():
        do_something(m)
    offset += pagesize

Change pagesize to fit your requirements, and optionally change the [offset : offset + pagesize] slice to the [offset * pagesize : (offset + 1) * pagesize] idiom if that fits you better (see the sketch below). Also, of course, replace Model with your actual model name.
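
For reference, the page-index variant mentioned above would look roughly like this (a sketch under the same assumptions as the loop above):

page = 0
pagesize = 1000
count = Model.objects.all().count()
while page * pagesize < count:
    # Same idea, but tracking a page number instead of a raw row offset
    for m in Model.objects.all()[page * pagesize:(page + 1) * pagesize].iterator():
        do_something(m)
    page += 1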

Answered By: Marcos Dumay

There is a Django snippet for this:

http://djangosnippets.org/snippets/1949/

It iterates over a queryset by yielding rows of smaller “chunks” of the original queryset. It ends up using significantly less memory while allowing you to tune for speed. I use it in one of my projects.
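
Roughly, the idea is something like this (a sketch based on the description above, not the snippet verbatim; see the link for the canonical version, the name chunked_queryset_iterator is a placeholder, and it assumes an auto-incrementing integer primary key):

import gc

def chunked_queryset_iterator(queryset, chunksize=1000):
    # Walk the queryset in primary-key order so that at most `chunksize`
    # objects are held in memory at any time.
    pk = 0
    queryset = queryset.order_by('pk')
    while True:
        chunk = list(queryset.filter(pk__gt=pk)[:chunksize])
        if not chunk:
            break
        for row in chunk:
            pk = row.pk
            yield row
        gc.collect()  # release the previous chunk promptly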

Answered By: Nick

What about using django core’s Paginator and Page objects documented here:

https://docs.djangoproject.com/en/dev/topics/pagination/

Something like this:

from django.core.paginator import Paginator
from djangoapp.models import SomeModel

paginator = Paginator(SomeModel.objects.all(), 1000) # chunks of 1000

for page_idx in paginator.page_range:  # 1-based and includes the last page
    for row in paginator.page(page_idx).object_list:
        do_something(row)  # here you can do what you want with the row
    print("done processing page %s" % page_idx)

Answered By: mpaf

Many solutions implement SQL OFFSET and LIMIT by slicing the queryset. As stefano notes, with larger datasets this becomes very inefficient. The proper way of handling this is to use a server-side cursor, which keeps track of the position for you instead of re-applying an ever-growing OFFSET.

Native server-side cursor support is in the works for Django. Until it’s ready, here is a simple implementation if you are using Postgres with the psycopg2 backend:

from django.db import connection, transaction

def server_cursor_query(Table):
    table_name = Table._meta.db_table

    # There must be an existing connection before creating a server-side cursor
    if connection.connection is None:
        dummy_cursor = connection.cursor()  # not a server-side cursor

    # Optionally keep track of the columns so that we can return a QuerySet. However,
    # if your table has foreign keys, you may need to rename them appropriately
    columns = [x.name for x in Table._meta.local_fields]

    cursor = connection.connection.cursor(name='gigantic_cursor')  # a server-side cursor

    with transaction.atomic():
        cursor.execute('SELECT {} FROM {}'.format(', '.join(columns), table_name))

        while True:
            rows = cursor.fetchmany(1000)

            if not rows:
                break

            for row in rows:
                fields = dict(zip(columns, row))
                yield Table(**fields)

See this blog post for a great explanation of memory issues from large queries in Django.

Answered By: drs

Short Answer

If you are using PostgreSQL or Oracle, you can use Django’s built-in iterator:

queryset.iterator(chunk_size=1000)

This causes Django to use server-side cursors and not cache models as it iterates through the queryset. As of Django 4.1, this will even work with prefetch_related.
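
For example (a sketch; Author and the "books" related name are hypothetical), on Django 4.1+ the prefetch is applied per chunk:

for author in Author.objects.prefetch_related("books").iterator(chunk_size=1000):
    do_something(author)  # hypothetical model; each chunk of 1000 authors has its books prefetched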

For other databases, you can use the following:

def queryset_iterator(queryset, page_size=1000):
    page = queryset.order_by("pk")[:page_size]
    while page:
        for obj in page:
            yield obj
            pk = obj.pk
        page = queryset.filter(pk__gt=pk).order_by("pk")[:page_size]

If you want to get back pages rather than individual objects to combine with other optimizations such as bulk_update, use this:

def queryset_to_pages(queryset, page_size=1000):
    page = queryset.order_by("pk")[:page_size]
    while page:
        yield page
        pk = max(obj.pk for obj in page)
        page = queryset.filter(pk__gt=pk).order_by("pk")[:page_size]
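
For instance, combined with bulk_update it might look like this (a sketch; is_archived is a hypothetical field on MyModel):

for page in queryset_to_pages(MyModel.objects.all(), page_size=1000):
    objs = list(page)              # materialize the current page once
    for obj in objs:
        obj.is_archived = True     # hypothetical field update
    MyModel.objects.bulk_update(objs, ["is_archived"])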

Performance Profiling on PostgreSQL

I profiled a number of different approaches on a PostgreSQL table with about 200,000 rows on Django 3.2 and Postgres 13. For every query, I added up the sum of the ids, both to ensure that Django was actually retrieving the objects and so that I could verify correctness of iteration between queries. All of the timings were taken after several iterations over the table in question to minimize caching advantages of later tests.

Basic Iteration

The basic approach is just iterating over the table. The main issue with this approach is that the amount of memory used is not constant; it grows with the size of the table, and I’ve seen this run out of memory on larger tables.

x = sum(i.id for i in MyModel.objects.all())

Wall time: 3.53 s, 22MB of memory (BAD)

Django Iterator

The Django iterator (at least as of Django 3.2) fixes the memory issue with a minor performance benefit. Presumably this comes from Django spending less time managing the cache.

assert sum(i.id for i in MyModel.objects.all().iterator(chunk_size=1000)) == x

Wall time: 3.11 s, <1MB of memory

Custom Iterator

The natural comparison point is attempting to do the paging ourselves with progressively advancing queries on the primary key. While this is an improvement over naive iteration in that it has constant memory, it actually loses to Django’s built-in iterator on speed because it makes more database queries.

def queryset_iterator(queryset, page_size=1000):
    page = queryset.order_by("pk")[:page_size]
    while page:
        for obj in page:
            yield obj
            pk = obj.pk
        page = queryset.filter(pk__gt=pk).order_by("pk")[:page_size]

assert sum(i.id for i in queryset_iterator(MyModel.objects.all())) == x

Wall time: 3.65 s, <1MB of memory

Custom Paging Function

The main reason to use the custom iteration is so that you can get the results in pages. This is very useful for plugging into bulk updates while only using constant memory. It’s a bit slower than queryset_iterator in my tests, and I don’t have a coherent theory as to why, but the slowdown isn’t substantial.

def queryset_to_pages(queryset, page_size=1000):
    page = queryset.order_by("pk")[:page_size]
    while page:
        yield page
        pk = max(obj.pk for obj in page)
        page = queryset.filter(pk__gt=pk).order_by("pk")[:page_size]

assert sum(i.id for page in queryset_to_pages(MyModel.objects.all()) for i in page) == x

Wall time: 4.49 s, <1MB of memory

Alternative Custom Paging Function

Given that Django’s queryset iterator is faster than doing the paging ourselves, the queryset pager can alternatively be implemented to use it. It’s a little bit faster than doing the paging ourselves, but the implementation is messier. Readability matters, which is why my personal preference is the previous paging function, but this one can be better if your queryset doesn’t have a primary key in the results (for whatever reason).

def queryset_to_pages2(queryset, page_size=1000):
    page = []
    page_count = 0
    for obj in queryset.iterator():
        page.append(obj)
        page_count += 1
        if page_count == page_size:
            yield page
            page = []
            page_count = 0
    if page:  # avoid yielding a trailing empty page
        yield page

assert sum(i.id for page in queryset_to_pages2(MyModel.objects.all()) for i in page) == x

Wall time: 4.33 s, <1MB of memory


Bad Approaches

The following are approaches you should never use (many of which are suggested in the question) along with why.

Do NOT Use Slicing on an Unordered Queryset

Whatever you do, do NOT slice an unordered queryset. This does not correctly iterate over the table. The reason is that the slice operation performs a SQL LIMIT + OFFSET query based on your queryset, and Django querysets have no ordering guarantee unless you use order_by. Additionally, PostgreSQL does not apply a default ordering, and the Postgres docs specifically warn against using LIMIT + OFFSET without ORDER BY. As a result, each time you take a slice you get a non-deterministic slice of your table, which means your slices may overlap or skip rows and, between them, will not reliably cover every row of the table. In my experience, this only happens if something else is modifying data in the table while you are doing the iteration, which only makes the problem more pernicious, because it means the bug might not show up if you are testing your code in isolation.

def very_bad_iterator(queryset, page_size=1000):
    counter = 0
    count = queryset.count()
    while counter < count:     
        for model in queryset[counter:counter+page_size].iterator():
            yield model
        counter += page_size

assert sum(i.id for i in very_bad_iterator(MyModel.objects.all())) == x

Assertion Error; i.e. INCORRECT RESULT COMPUTED!!!

Do NOT use Slicing for Whole-Table Iteration in General

Even if we order the queryset, list slicing is abysmal from a performance perspective. This is because SQL offset is a linear time operation, which means that a limit + offset paged iteration of a table will be quadratic time, which you absolutely do not want.

def bad_iterator(queryset, page_size=1000):
    counter = 0
    count = queryset.count()
    while counter < count:     
        for model in queryset.order_by("id")[counter:counter+page_size].iterator():
            yield model
        counter += page_size

assert sum(i.id for i in bad_iterator(MyModel.objects.all())) == x

Wall time: 15s (BAD), <1MB of memory
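
A quick back-of-the-envelope calculation shows where the time goes: with OFFSET paging, the database has to scan past every previously skipped row again on each page, so for the ~200,000-row table above:

n, page_size = 200_000, 1000
rows_skipped = sum(range(0, n, page_size))  # 0 + 1000 + 2000 + ... + 199000
print(rows_skipped)                         # 19,900,000 rows scanned just to skip them,
                                            # i.e. roughly n**2 / (2 * page_size)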

Do NOT use Django’s Paginator for Whole-Table Iteration

Django comes with a built-in Paginator. It may be tempting to think that it is appropriate for doing a paged iteration of a database, but it is not. The point of Paginator is for returning a single page of a result to a UI or an API endpoint. It is substantially slower than any of the good approaches at iterating over a table.

from django.core.paginator import Paginator

def bad_paged_iterator(queryset, page_size=1000):
    p = Paginator(queryset.order_by("pk"), page_size)
    for i in p.page_range:
        yield p.get_page(i)
        
assert sum(i.id for page in bad_paged_iterator(MyModel.objects.all()) for i in page) == x

Wall time: 13.1 s (BAD), <1MB of memory

Answered By: Zags

The following approach doesn’t use an expensive database offset query and avoids calculating the page number, making it more efficient.
Limitations specified in the docstring.

def queryset_pk_iterator(queryset, batch_size=1000):
    """
    Iterator that splits the queryset into batches to reduce memory consumption.
    Useful in cases where the built-in .iterator() method of the queryset skips the "prefetch_related" optimization.

    :param queryset: Queryset to iterate over. The supplied queryset must not specify order and limit/offset.
        Queryset objects must have a monotonically increasing, orderable primary key.
    :param batch_size: Size of the batches into which to split the queryset.
    :return: iterator object
    """
    pk = None
    while True:
        batch_queryset = queryset.order_by('pk')
        if pk is not None:
            batch_queryset = batch_queryset.filter(pk__gt=pk)
        batch_queryset = batch_queryset[:batch_size]
        obj = None
        for obj in batch_queryset:
            yield obj
        if obj is None:
            return
        pk = obj.pk
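
A usage sketch (Book and the "authors" related name are hypothetical), covering the prefetch_related case mentioned in the docstring:

for book in queryset_pk_iterator(Book.objects.prefetch_related("authors"), batch_size=500):
    do_something(book)  # each batch is an ordinary queryset, so prefetch_related still applies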

Answered By: dtatarkin