Splitting dictionary/list inside a Pandas Column into Separate Columns

南方客 2020-11-22 02:50

I have data saved in a PostgreSQL database. I am querying this data using Python 2.7 and turning it into a Pandas DataFrame. However, the last column of this DataFrame contains a dictionary of values, and I want to split it into separate columns so that each key becomes its own column.

12 Answers
  • 2020-11-22 03:30

    I've combined these steps into a single function; you only need to pass the DataFrame and the name of the column containing the dicts to expand:

    import json
    from typing import Dict

    import pandas as pd

    def expand_dataframe(dw: pd.DataFrame, column_to_expand: str) -> pd.DataFrame:
        """
        dw: DataFrame with a column of stringified dicts to expand
            into separate columns
        column_to_expand: name of that column in dw
        """
        def convert_to_dict(sequence: str) -> Dict:
            # json.loads rejects single-quoted strings, so swap the quotes
            # first (note: this breaks if any value contains an apostrophe)
            json_acceptable_string = sequence.replace("'", "\"")
            return json.loads(json_acceptable_string)

        expanded_dataframe = pd.concat([dw.drop([column_to_expand], axis=1),
                                        dw[column_to_expand]
                                        .apply(convert_to_dict)
                                        .apply(pd.Series)],
                                       axis=1)
        return expanded_dataframe
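
    A quick usage sketch; the sample data below is made up to mirror the question's 'Pollutants' column of stringified dicts:

    df = pd.DataFrame({'Station ID': [8809, 8810],
                       'Pollutants': ["{'a': '46', 'b': '3'}", "{'b': '2', 'c': '7'}"]})
    expanded = expand_dataframe(df, 'Pollutants')
    # expanded columns: 'Station ID', 'a', 'b', 'c' (missing keys become NaN)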
    
  • 2020-11-22 03:31

    I know the question is quite old, but I got here searching for answers. There is actually a better (and faster) way of doing this now, using json_normalize:

    import pandas as pd
    
    df2 = pd.json_normalize(df['Pollutant Levels'])
    

    This avoids costly apply functions...
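
    To keep the remaining columns as well, a minimal sketch (assuming df has a default integer index so the normalized rows line up) is to join the result back and drop the original column:

    df = df.join(pd.json_normalize(df['Pollutant Levels']))
    df = df.drop(columns=['Pollutant Levels'])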

  • 2020-11-22 03:33

    A one-line solution is the following:

    >>> df = pd.concat([df['Station ID'], df['Pollutants'].apply(pd.Series)], axis=1)
    >>> print(df)
       Station ID    a    b   c
    0        8809   46    3  12
    1        8810   36    5   8
    2        8811  NaN    2   7
    3        8812  NaN  NaN  11
    4        8813   82  NaN  15
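
    Note that this assumes the 'Pollutants' column already holds dicts. If it holds stringified dicts, convert first; a sketch using ast.literal_eval (as in the longer answer below):

    from ast import literal_eval
    df['Pollutants'] = df['Pollutants'].apply(literal_eval)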
    
  • 2020-11-22 03:33

    my_df = pd.DataFrame.from_dict(my_dict, orient='index', columns=['my_col'])

    .. would have parsed the dict properly (each dict key becomes a row in the index, with its value going into the 'my_col' column), so the dicts would not get squashed into a single column in the first place.
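
    A minimal sketch of what that call produces (my_dict here is a made-up example):

    import pandas as pd

    my_dict = {'a': 46, 'b': 3, 'c': 12}
    my_df = pd.DataFrame.from_dict(my_dict, orient='index', columns=['my_col'])
    #    my_col
    # a      46
    # b       3
    # c      12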

  • 2020-11-22 03:36

    1. pd.json_normalize(df.Pollutants) is significantly faster than df.Pollutants.apply(pd.Series).
       • See the %%timeit below: for 1M rows, .json_normalize is 47 times faster than .apply.
    2. Whether reading data from a file, or from an object returned by a database or API, it may not be clear whether the dict column has dict or str type.
       • If the dictionaries in the column are strings, they must be converted back to dict type, using ast.literal_eval.
    3. Use pd.json_normalize to convert the dicts, with keys as headers and values as rows.
       • It has additional parameters (e.g. record_path & meta) for dealing with nested dicts.
    4. Use pandas.DataFrame.join to combine the original DataFrame, df, with the columns created by pd.json_normalize.
       • If the index isn't integers (as in the example), first use df.reset_index() to get an index of integers, before doing the normalize and join.
    5. Finally, use pandas.DataFrame.drop to remove the now-unneeded column of dicts.
       • Note: if the column contains any NaN, they must be filled with an empty dict first:
         • df.Pollutants = df.Pollutants.fillna({i: {} for i in df.index})
         • If the 'Pollutants' column holds strings, fill with '{}' instead.
         • Also see How to json_normalize a column with NaNs?
    import pandas as pd
    from ast import literal_eval
    import numpy as np
    
    data = {'Station ID': [8809, 8810, 8811, 8812, 8813, 8814],
            'Pollutants': ['{"a": "46", "b": "3", "c": "12"}', '{"a": "36", "b": "5", "c": "8"}', '{"b": "2", "c": "7"}', '{"c": "11"}', '{"a": "82", "c": "15"}', np.nan]}
    
    df = pd.DataFrame(data)
    
    # display(df)
       Station ID                        Pollutants
    0        8809  {"a": "46", "b": "3", "c": "12"}
    1        8810   {"a": "36", "b": "5", "c": "8"}
    2        8811              {"b": "2", "c": "7"}
    3        8812                       {"c": "11"}
    4        8813            {"a": "82", "c": "15"}
    5        8814                               NaN
    
    # replace NaN with '{}' if the column is strings, otherwise replace with {}
    df.Pollutants = df.Pollutants.fillna('{}')  # the sample 'Pollutants' column holds strings
    # df.Pollutants = df.Pollutants.fillna({i: {} for i in df.index})  # use if the column holds dicts
    
    # Convert the column of stringified dicts to dicts
    # skip this line, if the column contains dicts
    df.Pollutants = df.Pollutants.apply(literal_eval)
    
    # reset the index if the index is not unique integers from 0 to n-1
    # df.reset_index(inplace=True)  # uncomment if needed
    
    # normalize the column of dictionaries and join it to df
    df = df.join(pd.json_normalize(df.Pollutants))
    
    # drop Pollutants
    df.drop(columns=['Pollutants'], inplace=True)
    
    # display(df)
       Station ID    a    b    c
    0        8809   46    3   12
    1        8810   36    5    8
    2        8811  NaN    2    7
    3        8812  NaN  NaN   11
    4        8813   82  NaN   15
    5        8814  NaN  NaN  NaN
    

    # dataframe with ~1.2M rows; build it after the literal_eval step,
    # but before the Pollutants column is dropped
    dfb = pd.concat([df]*200000).reset_index(drop=True)
    
    %%timeit
    dfb.join(pd.json_normalize(dfb.Pollutants))
    [out]:
    5.44 s ± 32.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
    
    %%timeit
    pd.concat([dfb.drop(columns=['Pollutants']), dfb.Pollutants.apply(pd.Series)], axis=1)
    [out]:
    4min 17s ± 2.44 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
    
  • 2020-11-22 03:37

    I strongly recommend this method for extracting the 'Pollutants' column:

    df_pollutants = pd.DataFrame(df['Pollutants'].values.tolist(), index=df.index)

    It's much faster than

    df_pollutants = df['Pollutants'].apply(pd.Series)

    when df is very large.
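
    A quick sketch on the question's sample data, assuming the 'Pollutants' column already holds dicts with no NaN (fill NaN with empty dicts first, as noted in the json_normalize answer above):

    import pandas as pd

    df = pd.DataFrame({'Station ID': [8809, 8810],
                       'Pollutants': [{'a': '46', 'b': '3'}, {'b': '2', 'c': '7'}]})
    df_pollutants = pd.DataFrame(df['Pollutants'].values.tolist(), index=df.index)
    df = df.drop(columns=['Pollutants']).join(df_pollutants)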
