Dealing with non-uniform JSON columns in spark dataframe
Question:
I would like to know what is the best practice for reading a newline delimited JSON file into a dataframe. Critically, one of the (required) fields in each record maps to an object that is not guaranteed to have the same sub-fields (ie the schema is non-uniform across all the records).
For example, an input file might look like:
{"id": 1, "type": "foo", "data": {"key0": "foo", "key2": "meh"}}
{"id": 2, "type": "bar", "data": {"key2": "poo", "key3": "pants"}}
{"id": 3, "type": "baz", "data": {"key3": "moo"}}
In this case the fields id, type, and data will be present in all records, but the struct mapped to by data will have a heterogeneous schema.
I have two approaches for dealing with the non-uniformity of the data column:
- let Spark infer the schema:
df = spark.read.options(samplingRatio=1.0).json('s3://bucket/path/to/newline_separated_json.txt')
The obvious issue with this approach is the need to sample every record to determine the super-set of fields that will form the final schema. This may be prohibitively expensive for a dataset in the low hundreds of millions of records. Or…
- Tell Spark to coerce the data field into a JSON string, and then just have a schema comprised of three top-level string fields: id, type, and data. Here I'm not really sure of the best way to proceed. For example, I am assuming that just declaring the data field to be a string, as in the following, will not work because it is not explicitly doing the equivalent of json.dumps?
schema = StructType([
    StructField("id", StringType(), True),
    StructField("type", StringType(), True),
    StructField("data", StringType(), True)
])
df = spark.read.json('s3://bucket/path/to/newline_separated_json.txt', schema=schema)
If I want to avoid the cost of scanning the full dataset incurred by option 1, what is the best way to ingest this file and keep the data
field as a JSON string?
Thank you
Answers:
I think your attempt and the overall idea are in the right direction. Here are two more approaches: one based on the built-in functions get_json_object()/from_json() via the DataFrame API, and one using a map transformation together with Python's json.dumps() and json.loads() via the RDD API.
Option 1: get_json_object() / from_json()
First let's try with get_json_object(), which doesn't require a schema:
from pyspark.sql.types import StringType
import pyspark.sql.functions as f

df = spark.createDataFrame([
    '{"id": 1, "type": "foo", "data": {"key0": "foo", "key2": "meh"}}',
    '{"id": 2, "type": "bar", "data": {"key2": "poo", "key3": "pants"}}',
    '{"id": 3, "type": "baz", "data": {"key3": "moo"}}'
], StringType())

df.select(f.get_json_object("value", "$.id").alias("id"),
          f.get_json_object("value", "$.type").alias("type"),
          f.get_json_object("value", "$.data").alias("data")).show(truncate=False)
# +---+----+-----------------------------+
# |id |type|data |
# +---+----+-----------------------------+
# |1 |foo |{"key0":"foo","key2":"meh"} |
# |2 |bar |{"key2":"poo","key3":"pants"}|
# |3 |baz |{"key3":"moo"} |
# +---+----+-----------------------------+
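Applied to your actual file, you could combine get_json_object() with spark.read.text so that no schema inference runs at all. This is a minimal sketch, untested, reusing the S3 path from your question; "value" is the default column name produced by spark.read.text:

import pyspark.sql.functions as f

raw = spark.read.text('s3://bucket/path/to/newline_separated_json.txt')
df = raw.select(f.get_json_object("value", "$.id").alias("id"),
                f.get_json_object("value", "$.type").alias("type"),
                f.get_json_object("value", "$.data").alias("data"))

This keeps data as the raw JSON string and avoids scanning the dataset to derive a schema.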
In contrast, from_json() requires a schema definition:
from pyspark.sql.types import StringType, StructType, StructField
import pyspark.sql.functions as f

df = spark.createDataFrame([
    '{"id": 1, "type": "foo", "data": {"key0": "foo", "key2": "meh"}}',
    '{"id": 2, "type": "bar", "data": {"key2": "poo", "key3": "pants"}}',
    '{"id": 3, "type": "baz", "data": {"key3": "moo"}}'
], StringType())

schema = StructType([
    StructField("id", StringType(), True),
    StructField("type", StringType(), True),
    StructField("data", StringType(), True)
])

df.select(f.from_json("value", schema).getItem("id").alias("id"),
          f.from_json("value", schema).getItem("type").alias("type"),
          f.from_json("value", schema).getItem("data").alias("data")).show(truncate=False)
# +---+----+-----------------------------+
# |id |type|data |
# +---+----+-----------------------------+
# |1 |foo |{"key0":"foo","key2":"meh"} |
# |2 |bar |{"key2":"poo","key3":"pants"}|
# |3 |baz |{"key3":"moo"} |
# +---+----+-----------------------------+
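If you prefer to parse each line only once rather than calling from_json() per column, you can expand the resulting struct instead (a sketch under the same setup as above):

parsed = df.select(f.from_json("value", schema).alias("parsed")).select("parsed.*")
parsed.show(truncate=False)

The result is the same three string columns, and this form is easier to extend if more top-level fields are added later.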
Option 2: map/RDD API + json.dumps()
from pyspark.sql.types import StringType, StructType, StructField
import json

df = spark.createDataFrame([
    '{"id": 1, "type": "foo", "data": {"key0": "foo", "key2": "meh"}}',
    '{"id": 2, "type": "bar", "data": {"key2": "poo", "key3": "pants"}}',
    '{"id": 3, "type": "baz", "data": {"key3": "moo"}}'
], StringType())

def from_json(data):
    row = json.loads(data[0])
    return (row['id'], row['type'], json.dumps(row['data']))

json_rdd = df.rdd.map(from_json)

schema = StructType([
    StructField("id", StringType(), True),
    StructField("type", StringType(), True),
    StructField("data", StringType(), True)
])

spark.createDataFrame(json_rdd, schema).show(10, False)
# +---+----+--------------------------------+
# |id |type|data |
# +---+----+--------------------------------+
# |1 |foo |{"key2": "meh", "key0": "foo"} |
# |2 |bar |{"key2": "poo", "key3": "pants"}|
# |3 |baz |{"key3": "moo"} |
# +---+----+--------------------------------+
The function from_json transforms each string row into a tuple of (id, type, data): json.loads() parses the JSON string into a dictionary, from which we build and return the final tuple, using json.dumps() to serialise the data dictionary back into a JSON string.
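To apply the same idea directly to the file from your question, you can start from sparkContext.textFile instead of a literal DataFrame. A sketch (untested), reusing json and schema from the snippet above; note that textFile yields plain strings rather than Rows, so the line is parsed directly:

def parse_line(line):
    # line is a plain string here, not a Row, so no [0] indexing is needed
    row = json.loads(line)
    return (row['id'], row['type'], json.dumps(row['data']))

lines = spark.sparkContext.textFile('s3://bucket/path/to/newline_separated_json.txt')
spark.createDataFrame(lines.map(parse_line), schema).show(10, False)

This avoids any inference pass over the dataset and keeps the data field as a JSON string, as requested.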
I recommend looking into Rumble for querying heterogeneous JSON datasets on Spark that do not fit in DataFrames. This is precisely the problem it solves, and it is free and open source.
For example:
for $i in json-file("s3://bucket/path/to/newline_separated_json.txt")
where keys($i.data) = "key2" (: keeping only those objects that have a key2 :)
group by $type := $i.type
return {
"type" : $type,
"key2-values" : [ $i.data.key2 ]
}
(Disclaimer: I am part of the team.)