Splitting dictionary/list inside a Pandas Column into Separate Columns

南方客 2020-11-22 02:50

I have data saved in a PostgreSQL database. I am querying this data using Python 2.7 and turning it into a Pandas DataFrame. However, the last column of this data contains dictionaries that I would like to split into separate columns.

12 Answers
  •  无人及你
    2020-11-22 03:36

    1. pd.json_normalize(df.Pollutants) is significantly faster than df.Pollutants.apply(pd.Series)
      • See the %%timeit below. For 1M rows, .json_normalize is 47 times faster than .apply.
    2. Whether the data is read from a file, a database, or an API, it may not be obvious whether the dict column has dict or str type.
      • If the dictionaries in the column are strings, they must be converted back to a dict type, using ast.literal_eval.
    3. Use pd.json_normalize to convert the dicts, with keys as headers and values for rows.
      • Has additional parameters (e.g. record_path & meta) for dealing with nested dicts.
    4. Use pandas.DataFrame.join to combine the original DataFrame, df, with the columns created using pd.json_normalize.
      • If the index isn't integers (as in the example), first use df.reset_index() to get an index of integers, before doing the normalize and join.
    5. Finally, use pandas.DataFrame.drop to remove the unneeded column of dicts.
    • As a note, if the column has any NaN, they must be filled with an empty dict
      • df.Pollutants = df.Pollutants.fillna({i: {} for i in df.index})
        • If the 'Pollutants' column is strings, use '{}'.
        • Also see How to json_normalize a column with NaNs?.
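    Point 2 above can be seen directly. A minimal runnable sketch (sqlite3 stands in for PostgreSQL here purely so it runs anywhere; the table and column names are made up for illustration) shows the JSON column coming back as strings:

    ```python
    import sqlite3
    import pandas as pd

    # sqlite3 used in place of PostgreSQL so this sketch is self-contained
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE stations (station_id INTEGER, pollutants TEXT)")
    con.execute("""INSERT INTO stations VALUES (8809, '{"a": "46", "b": "3"}')""")
    con.commit()

    df = pd.read_sql("SELECT * FROM stations", con)
    # The JSON column comes back as plain strings, not dicts, which is
    # why ast.literal_eval is needed before pd.json_normalize
    print(type(df.loc[0, "pollutants"]))  # <class 'str'>
    ```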
    import pandas as pd
    from ast import literal_eval
    import numpy as np
    
    data = {'Station ID': [8809, 8810, 8811, 8812, 8813, 8814],
            'Pollutants': ['{"a": "46", "b": "3", "c": "12"}', '{"a": "36", "b": "5", "c": "8"}', '{"b": "2", "c": "7"}', '{"c": "11"}', '{"a": "82", "c": "15"}', np.nan]}
    
    df = pd.DataFrame(data)
    
    # display(df)
       Station ID                        Pollutants
    0        8809  {"a": "46", "b": "3", "c": "12"}
    1        8810   {"a": "36", "b": "5", "c": "8"}
    2        8811              {"b": "2", "c": "7"}
    3        8812                       {"c": "11"}
    4        8813            {"a": "82", "c": "15"}
    5        8814                               NaN
    
    # replace NaN with '{}' if the column is strings, otherwise replace with {}
    # df.Pollutants = df.Pollutants.fillna('{}')  # if the NaN is in a column of strings
    df.Pollutants = df.Pollutants.fillna({i: {} for i in df.index})  # if the column is not strings
    
    # Convert the column of stringified dicts to dicts
    # skip this line, if the column contains dicts
    df.Pollutants = df.Pollutants.apply(literal_eval)
    
    # reset the index if the index is not unique integers from 0 to n-1
    # df.reset_index(inplace=True)  # uncomment if needed
    
    # normalize the column of dictionaries and join it to df
    df = df.join(pd.json_normalize(df.Pollutants))
    
    # drop Pollutants
    df.drop(columns=['Pollutants'], inplace=True)
    
    # display(df)
       Station ID    a    b    c
    0        8809   46    3   12
    1        8810   36    5    8
    2        8811  NaN    2    7
    3        8812  NaN  NaN   11
    4        8813   82  NaN   15
    5        8814  NaN  NaN  NaN
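    The record_path and meta parameters mentioned in point 3 handle nested structures. A sketch with made-up nested records (station numbers and field names here are purely illustrative): record_path names the list to expand into rows, and meta pulls per-record fields into each resulting row:

    ```python
    import pandas as pd

    # Hypothetical nested records: each station has a list of readings plus metadata
    data = [
        {"station": 8809, "info": {"city": "Denver"},
         "readings": [{"gas": "a", "value": 46}, {"gas": "b", "value": 3}]},
        {"station": 8810, "info": {"city": "Boulder"},
         "readings": [{"gas": "a", "value": 36}]},
    ]

    # one row per reading; 'station' and the nested 'info.city' are repeated per row
    df_nested = pd.json_normalize(data, record_path="readings",
                                  meta=["station", ["info", "city"]])
    print(df_nested)
    ```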
    

    # Timing setup: rebuild a 1M-row DataFrame from the version of df that
    # still contains the 'Pollutants' column of dicts (i.e. before the drop above)
    dfb = pd.concat([df]*200000).reset_index(drop=True)
    
    %%timeit
    dfb.join(pd.json_normalize(dfb.Pollutants))
    [out]:
    5.44 s ± 32.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
    
    %%timeit
    pd.concat([dfb.drop(columns=['Pollutants']), dfb.Pollutants.apply(pd.Series)], axis=1)
    [out]:
    4min 17s ± 2.44 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
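    For completeness, another commonly used variant (not part of the timings above) builds the new columns with a single DataFrame constructor call over a list of dicts, avoiding the per-row overhead of .apply(pd.Series). A sketch, assuming the column already holds dicts (not strings):

    ```python
    import pandas as pd

    df = pd.DataFrame({
        "Station ID": [8809, 8810],
        "Pollutants": [{"a": "46", "b": "3"}, {"b": "2", "c": "7"}],
    })

    # Build the new columns in one vectorized constructor call,
    # keeping the original index so the join aligns correctly
    expanded = pd.DataFrame(df.Pollutants.tolist(), index=df.index)
    out = df.drop(columns=["Pollutants"]).join(expanded)
    print(out)
    ```

    Missing keys become NaN, just as with pd.json_normalize.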
    
