How to read a large JSON file in pandas?

Question:

My code is: data_review = pd.read_json('review.json')

I have the review data as follows:

{
    // string, 22 character unique review id
    "review_id": "zdSx_SD6obEhz9VrW9uAWA",

    // string, 22 character unique user id, maps to the user in user.json
    "user_id": "Ha3iJu77CxlrFm-vQRs_8g",

    // string, 22 character business id, maps to business in business.json
    "business_id": "tnhfDv5Il8EaGSXZGiuQGg",

    // integer, star rating
    "stars": 4,

    // string, date formatted YYYY-MM-DD
    "date": "2016-03-09",

    // string, the review itself
    "text": "Great place to hang out after work: the prices are decent, and the ambience is fun. It's a bit loud, but very lively. The staff is friendly, and the food is good. They have a good selection of drinks.",

    // integer, number of useful votes received
    "useful": 0,

    // integer, number of funny votes received
    "funny": 0,

    // integer, number of cool votes received
    "cool": 0
}

But I got the following error:

    333             fh, handles = _get_handle(filepath_or_buffer, 'r',
    334                                       encoding=encoding)
--> 335             json = fh.read()
    336             fh.close()
    337         else:

OSError: [Errno 22] Invalid argument

My JSON file does not contain any comments, and it is 3.8 GB!
I just downloaded the file from here to practice: link

When I use the following code, it throws the same error:

import json
with open('review.json') as json_file:
    data = json.load(json_file)
Asked By: ileadall42


Answers:

Perhaps the file you are reading contains multiple JSON objects rather than the single JSON object or array that json.load(json_file) and pd.read_json('review.json') expect. These methods are meant for files containing a single JSON document.

From the Yelp dataset I have seen, your file must contain something like:

{"review_id":"xxxxx","user_id":"xxxxx","business_id":"xxxx","stars":5,"date":"xxx-xx-xx","text":"xyxyxyxyxx","useful":0,"funny":0,"cool":0}
{"review_id":"yyyy","user_id":"yyyyy","business_id":"yyyyy","stars":3,"date":"yyyy-yy-yy","text":"ababababab","useful":0,"funny":0,"cool":0}
....    
....

and so on.

Hence, it is important to realize that this is not a single JSON document; rather, it is multiple JSON objects in one file, one per line.
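A quick way to confirm this for yourself (a minimal sketch, assuming the file is named review.json as in the question) is to parse the first couple of lines individually:

import json

with open('review.json') as f:
    for i, line in enumerate(f):
        record = json.loads(line)  # each line should parse as a standalone JSON object
        print(record['review_id'], record['stars'])
        if i == 1:  # stop after inspecting the first two lines
            break

If every line parses on its own, the file is newline-delimited JSON (JSON Lines), not one big JSON document.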

To read this data into a pandas DataFrame, the following solution should work:

import json
import pandas as pd

with open('review.json') as json_file:
    data = json_file.readlines()
    # The line below may take 8-10 minutes for 4-5 million rows:
    # it converts every string in the list into an actual JSON object.
    data = list(map(json.loads, data))

df = pd.DataFrame(data)

Assuming the data is pretty large, your machine will take a considerable amount of time to load it into a DataFrame.
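If memory is tight, a variant of the same idea (again just a sketch, under the same assumptions about review.json) is to hand pandas a generator, so the intermediate list of raw strings is never materialized:

import json
import pandas as pd

with open('review.json') as json_file:
    # a generator expression parses one line at a time, so the full list
    # of raw strings and the parsed objects never coexist in memory
    df = pd.DataFrame(json.loads(line) for line in json_file)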

Answered By: Shaurya Mittal

If you don’t want to use a for-loop, the following should do the trick:

import pandas as pd

df = pd.read_json("foo.json", lines=True)

This will handle the case where your JSON file looks similar to this:

{"foo": "bar"}
{"foo": "baz"}
{"foo": "qux"}

And will turn it into a DataFrame consisting of a single column, foo, with three rows.
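To see this end to end, here is a self-contained sketch (it writes a throwaway foo.json first, so the file name is just for illustration):

import pandas as pd

# write the newline-delimited example from above
with open("foo.json", "w") as f:
    f.write('{"foo": "bar"}\n{"foo": "baz"}\n{"foo": "qux"}\n')

df = pd.read_json("foo.json", lines=True)
print(df)
#    foo
# 0  bar
# 1  baz
# 2  qux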

You can read more in the pandas docs.

Answered By: Mant1c0r3

Using the arguments lines=True and chunksize=X will create a reader that reads X lines at a time.

Then you have to loop over the reader to process each chunk.

Here is a piece of code to illustrate:

import pandas as pd

chunks = pd.read_json('../input/data.json', lines=True, chunksize=10000)
for chunk in chunks:
    print(chunk)
    break

The reader yields multiple chunks according to the length of your JSON file (measured in lines).
For example, if I have a 100,000-line JSON file and set chunksize=10000, I will get 10 chunks.

In the code above I added a break so that only the first chunk is printed, but if you remove it, you will get all 10 chunks one by one.
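The point of chunking is that you can aggregate as you go instead of ever holding the whole file in memory. For example (a sketch assuming the Yelp review.json from the question, which has a stars column), counting reviews per star rating:

import pandas as pd

chunks = pd.read_json('review.json', lines=True, chunksize=10000)

star_counts = pd.Series(dtype='int64')
for chunk in chunks:
    # add this chunk's counts to the running total;
    # only one chunk is in memory at a time
    star_counts = star_counts.add(chunk['stars'].value_counts(), fill_value=0)

print(star_counts)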

Answered By: Max

If your JSON file contains multiple objects instead of one, the following should work:

import json

data = []
with open('sample.json', 'r') as f:
    for line in f:
        data.append(json.loads(line))  # parse each line as its own JSON object

Notice the difference between json.load and json.loads.

json.loads() expects a (valid) JSON string, e.g. {"foo": "bar"}, whereas json.load() expects a file object containing a single JSON document. So, if your JSON file looks like what @Mant1c0r3 mentioned, json.loads is the appropriate choice for each line.
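A minimal, self-contained sketch contrasting the two:

import io
import json

# json.loads parses a JSON *string*
obj = json.loads('{"foo": "bar"}')

# json.load parses an open *file-like object* containing one JSON document
obj2 = json.load(io.StringIO('{"foo": "bar"}'))

assert obj == obj2 == {"foo": "bar"}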

Answered By: mOna

I’m building on Max’s answer to load a large JSON file into a DataFrame without running into memory errors.

You could use the following code and you won’t run into any issues:

import pandas as pd

chunks = pd.read_json('review.json', lines=True, chunksize=10000)

reviews = pd.DataFrame()
for chunk in chunks:
    reviews = pd.concat([reviews, chunk])
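Note that calling pd.concat inside the loop re-copies all the accumulated rows on every iteration, which gets slow for millions of rows. A faster equivalent (same assumptions as above) is a single concat over the reader:

import pandas as pd

chunks = pd.read_json('review.json', lines=True, chunksize=10000)
reviews = pd.concat(chunks, ignore_index=True)  # one concat instead of one per chunk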