Dealing with non-uniform JSON columns in a Spark DataFrame

Asked by 闹比i on 2021-01-14 07:48

I would like to know the best practice for reading a newline-delimited JSON file into a DataFrame. Critically, one of the (required) fields in each record maps to an object whose keys are not uniform across records.

2 Answers
  • 2021-01-14 08:37

    I think your attempt and the overall idea are on the right track. Here are two more approaches: one based on the built-in JSON functions get_json_object()/from_json() via the DataFrame API, and one using a map transformation together with Python's json.loads()/json.dumps() via the RDD API.

    Option 1: get_json_object() / from_json()

    First, let's try get_json_object(), which doesn't require a schema:

    import pyspark.sql.functions as f
    from pyspark.sql.types import StringType
    
    df = spark.createDataFrame([
      '{"id": 1, "type": "foo", "data": {"key0": "foo", "key2": "meh"}}',
      '{"id": 2, "type": "bar", "data": {"key2": "poo", "key3": "pants"}}',
      '{"id": 3, "type": "baz", "data": {"key3": "moo"}}'
    ], StringType())
    
    df.select(f.get_json_object("value", "$.id").alias("id"),
              f.get_json_object("value", "$.type").alias("type"),
              f.get_json_object("value", "$.data").alias("data")) \
      .show(truncate=False)
    
    # +---+----+-----------------------------+
    # |id |type|data                         |
    # +---+----+-----------------------------+
    # |1  |foo |{"key0":"foo","key2":"meh"}  |
    # |2  |bar |{"key2":"poo","key3":"pants"}|
    # |3  |baz |{"key3":"moo"}               |
    # +---+----+-----------------------------+
    
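    Because the keys inside data differ per record, you can keep drilling into that string with further JSON paths; get_json_object() simply returns null wherever a key is absent. A minimal follow-up sketch reusing the df above (key names taken from the sample data):

    df.select(f.get_json_object("value", "$.id").alias("id"),
              f.get_json_object("value", "$.data.key2").alias("key2"),   # null where absent
              f.get_json_object("value", "$.data.key3").alias("key3")) \
      .show(truncate=False)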

    In contrast, from_json() requires a schema definition:

    from pyspark.sql.types import StringType, StructType, StructField
    import pyspark.sql.functions as f
    
    df = spark.createDataFrame([
      '{"id": 1, "type": "foo", "data": {"key0": "foo", "key2": "meh"}}',
      '{"id": 2, "type": "bar", "data": {"key2": "poo", "key3": "pants"}}',
      '{"id": 3, "type": "baz", "data": {"key3": "moo"}}'
    ], StringType())
    
    # declaring data as StringType keeps the non-uniform object as a raw JSON string
    schema = StructType([
        StructField("id", StringType(), True),
        StructField("type", StringType(), True),
        StructField("data", StringType(), True)
    ])
    
    # parse the value column once, then select the individual fields
    df.select(f.from_json("value", schema).alias("json")) \
      .select("json.id", "json.type", "json.data") \
      .show(truncate=False)
    
    # +---+----+-----------------------------+
    # |id |type|data                         |
    # +---+----+-----------------------------+
    # |1  |foo |{"key0":"foo","key2":"meh"}  |
    # |2  |bar |{"key2":"poo","key3":"pants"}|
    # |3  |baz |{"key3":"moo"}               |
    # +---+----+-----------------------------+
    
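    A hedged variation (my addition, not part of the original answer): if you want to query the non-uniform data object directly rather than keep it as a raw string, you can declare it as a MapType in the schema; keys that are absent from a record simply don't appear in that row's map. Reusing the df and imports above:

    from pyspark.sql.types import MapType
    
    map_schema = StructType([
        StructField("id", StringType(), True),
        StructField("type", StringType(), True),
        # a map tolerates a different key set in every record
        StructField("data", MapType(StringType(), StringType()), True)
    ])
    
    df.select(f.from_json("value", map_schema).alias("json")) \
      .select("json.id", "json.type",
              f.col("json.data").getItem("key2").alias("key2")) \
      .show(truncate=False)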

    Option 2: map/RDD API + json.dumps()

    from pyspark.sql.types import StringType, StructType, StructField
    import json
    
    df = spark.createDataFrame([
      '{"id": 1, "type": "foo", "data": {"key0": "foo", "key2": "meh"}}',
      '{"id": 2, "type": "bar", "data": {"key2": "poo", "key3": "pants"}}',
      '{"id": 3, "type": "baz", "data": {"key3": "moo"}}'
    ], StringType())
    
    def from_json(data):
      # data is a Row holding the raw JSON string at index 0
      row = json.loads(data[0])
      # cast id to str to match the StringType schema below, and
      # re-serialize the non-uniform data object back into a JSON string
      return (str(row['id']), row['type'], json.dumps(row['data']))
    
    json_rdd = df.rdd.map(from_json)
    
    schema = StructType([
        StructField("id", StringType(), True),
        StructField("type", StringType(), True),
        StructField("data", StringType(), True)
    ])
    
    spark.createDataFrame(json_rdd, schema).show(10, False)
    
    # +---+----+--------------------------------+
    # |id |type|data                            |
    # +---+----+--------------------------------+
    # |1  |foo |{"key2": "meh", "key0": "foo"}  |
    # |2  |bar |{"key2": "poo", "key3": "pants"}|
    # |3  |baz |{"key3": "moo"}                 |
    # +---+----+--------------------------------+
    
    

    The from_json function transforms each string row into a tuple of (id, type, data): json.loads() parses the JSON string into a dictionary, from which we build the final tuple, re-serializing the non-uniform data object with json.dumps().
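    If some records may omit type or data entirely, a slightly hardened variant (a sketch; from_json_safe is a hypothetical name, my addition) can fall back to defaults via dict.get() instead of raising a KeyError:

    def from_json_safe(data):
      row = json.loads(data[0])
      # id is required; type and data degrade gracefully when missing
      return (str(row['id']), row.get('type'), json.dumps(row.get('data', {})))
    
    json_rdd = df.rdd.map(from_json_safe)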

  • 2021-01-14 08:38

    I recommend looking into Rumble for querying heterogeneous JSON datasets on Spark when they do not fit into DataFrames; this is precisely the problem it solves. It is free and open-source.

    For example:

    for $i in json-file("s3://bucket/path/to/newline_separated_json.txt")
    where keys($i.data) = "key2" (: keeping only those objects that have a key2 :)
    group by $type := $i.type
    return {
      "type" : $type,
      "key2-values" : [ $i.data.key2 ]
    }
    

    (Disclaimer: I am part of the team.)
