I have some JSON that's being read from a file, where each row looks something like this:

{
"id": "someGu
You can try the from_json function to convert the column/field from StructType into MapType, then explode it and pull out the fields you need. For your example JSON, you will have to repeat this at each nesting level:
from pyspark.sql.functions import explode, from_json, to_json, json_tuple, coalesce

(df
    # 1. convert the players struct into map<string,string>, one row per player
    .select(explode(from_json(to_json('data.data.players'), "map<string,string>")))
    # 2. pull the named fields out of each player's JSON string
    .select(json_tuple('value', 'locationId', 'id', 'name', 'assets', 'dict')
            .alias('Location', 'Player_ID', 'Player', 'assets', 'dict'))
    # 3. parse assets (or dict, whichever is present) into a map of structs,
    #    one row per asset
    .select('*', explode(from_json(coalesce('assets', 'dict'),
            "map<string,struct<isActive:boolean,playlists:string>>")))
    # 4. parse playlists into map<string,string>, one row per playlist entry
    .selectExpr(
        'Location',
        'Player_ID',
        'Player',
        'key as Asset_ID',
        'value.isActive',
        'explode(from_json(value.playlists, "map<string,string>")) as (Playlist_ID, Playlist_Status)'
    )
    .show())
+--------+---------+--------+--------+--------+------------+---------------+
|Location|Player_ID| Player|Asset_ID|isActive| Playlist_ID|Playlist_Status|
+--------+---------+--------+--------+--------+------------+---------------+
|someGuid| player_1|someName|assetId1| true| someId1| true|
|someGuid| player_1|someName|assetId1| true|someOtherId1| false|
|someGuid| player_1|someName|assetId2| true| someId1| true|
|someGuid| player_2|someName|assetId3| true| someId1| true|
|someGuid| player_2|someName|assetId3| true|someOtherId1| false|
|someGuid| player_2|someName|assetId4| true| someId1| true|
+--------+---------+--------+--------+--------+------------+---------------+
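The whole chain boils down to one idea: repeatedly parse JSON strings whose values are themselves JSON strings, flattening one level per pass. As a minimal sketch of that idea outside Spark, here is the equivalent in plain Python with the stdlib json module (the sample row below is invented to mirror the structure, not taken from your data):

```python
import json

# A row whose nested fields are JSON-encoded strings, mirroring the
# shape the repeated from_json calls above are unwrapping (made-up sample).
row = json.dumps({
    "players": json.dumps({
        "player_1": json.dumps({
            "name": "someName",
            "assets": json.dumps({
                "assetId1": {"isActive": True,
                             "playlists": json.dumps({"someId1": "true"})}
            })
        })
    })
})

rows = []
# first from_json + explode: one entry per player
players = json.loads(json.loads(row)["players"])
for player_id, player_json in players.items():
    player = json.loads(player_json)            # json_tuple equivalent
    assets = json.loads(player["assets"])       # second from_json + explode
    for asset_id, asset in assets.items():
        playlists = json.loads(asset["playlists"])  # third from_json + explode
        for playlist_id, status in playlists.items():
            rows.append((player_id, player["name"], asset_id,
                         asset["isActive"], playlist_id, status))

print(rows)
# → [('player_1', 'someName', 'assetId1', True, 'someId1', 'true')]
```

Each json.loads/loop pair corresponds to one from_json + explode stage in the Spark version; Spark just does the looping for you across the whole DataFrame.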