Reading rather large JSON files

Question:

I have some large json encoded files. The smallest is 300MB; the rest are multiple GB, anywhere from around 2GB to 10GB+.

I seem to run out of memory when trying to load the files in Python.

I tried using this code to test performance:

from datetime import datetime
import json

print(datetime.now())

with open('file.json', 'r') as f:
    json.load(f)

print(datetime.now())

Not too surprisingly, this causes a MemoryError. It appears that json.load() effectively calls json.loads(f.read()), so it reads the entire file into memory first, and that clearly isn't going to work here.

How can I solve this cleanly?


I know this is old, but I don't think this is a duplicate. While the answer is the same, the question is different. In the "duplicate", the question is how to read large files efficiently, whereas this question deals with files that won't even fit into memory at all. Efficiency isn't required.

Asked By: Tom Carrick


Answers:

The issue here is that JSON, as a format, is generally parsed in full and then handled in-memory, which for such a large amount of data is clearly problematic.

The solution to this is to work with the data as a stream – reading part of the file, working with it, and then repeating.

The best option appears to be using something like ijson – a module that will work with JSON as a stream, rather than as a block file.
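For illustration, here is a minimal sketch of how ijson could be used, assuming the top-level JSON document is an array; the filename and the process_item function are placeholders:

import ijson

with open('file.json', 'rb') as f:
    # ijson.items() yields one element at a time from the array at the
    # given prefix ('item' selects each element of the top-level array),
    # so only the current element needs to be held in memory.
    for obj in ijson.items(f, 'item'):
        process_item(obj)  # placeholder for whatever work you need to do

If the document is one huge object rather than an array, ijson.parse() can be used instead; it walks the file as a stream of (prefix, event, value) parse events, again without loading the whole file.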

Edit: Also worth a look – kashif’s comment about json-streamer and Henrik Heino’s comment about bigjson.

Answered By: Gareth Latty