I have data saved in a PostgreSQL database. I am querying this data using Python 2.7 and turning it into a Pandas DataFrame. However, the last column of this DataFrame contains dictionaries of values, and I need to split it into separate columns, one per dictionary key.
- `pd.json_normalize(df.Pollutants)` is significantly faster than `df.Pollutants.apply(pd.Series)`.
  - See the `%%timeit` results below: for 1M rows, `.json_normalize` is 47 times faster than `.apply`.
- Whether the data comes from a file or a database, it may not be clear if the column has `dict` or `str` type.
  - If the dictionaries in the column are `str` type, convert them back to `dict` type with `ast.literal_eval`; a type-dispatching sketch follows this list.
- Use `pd.json_normalize` to convert the `dicts`, with `keys` as headers and `values` for rows.
  - There are additional parameters (e.g. `record_path` & `meta`) for dealing with nested `dicts`; a sketch follows the worked example below.
- Use `pandas.DataFrame.join` to combine the original DataFrame, `df`, with the columns created by `pd.json_normalize`.
  - If the index isn't unique integers from 0 to n-1, first use `df.reset_index()` so the join aligns correctly.
- If the column contains any `NaN`, those values must be filled with an empty `dict` first:
  - `df.Pollutants = df.Pollutants.fillna({i: {} for i in df.index})`
  - If the `'Pollutants'` column holds strings, use `'{}'` instead.
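When it isn't obvious whether the column holds `dict` objects or their stringified form, a small dispatching converter handles both cases (plus `NaN`) in one pass. This is a minimal sketch, not part of the original answer; the helper name `to_dict` is made up for illustration:

```python
import ast

import numpy as np
import pandas as pd

def to_dict(value):
    """Return a dict whether the cell holds a dict, a stringified dict, or NaN."""
    if isinstance(value, dict):
        return value                    # already a dict
    if isinstance(value, str):
        return ast.literal_eval(value)  # parse the stringified dict
    return {}                           # NaN / None becomes an empty dict

df = pd.DataFrame({'Pollutants': [{'a': '1'}, '{"b": "2"}', np.nan]})
df.Pollutants = df.Pollutants.apply(to_dict)
print(pd.json_normalize(df.Pollutants))
#      a    b
# 0    1  NaN
# 1  NaN    2
# 2  NaN  NaN
```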
```python
import pandas as pd
from ast import literal_eval
import numpy as np

data = {'Station ID': [8809, 8810, 8811, 8812, 8813, 8814],
        'Pollutants': ['{"a": "46", "b": "3", "c": "12"}',
                       '{"a": "36", "b": "5", "c": "8"}',
                       '{"b": "2", "c": "7"}',
                       '{"c": "11"}',
                       '{"a": "82", "c": "15"}',
                       np.nan]}

df = pd.DataFrame(data)

# display(df)
   Station ID                        Pollutants
0        8809  {"a": "46", "b": "3", "c": "12"}
1        8810   {"a": "36", "b": "5", "c": "8"}
2        8811              {"b": "2", "c": "7"}
3        8812                       {"c": "11"}
4        8813            {"a": "82", "c": "15"}
5        8814                               NaN

# replace NaN with '{}' if the column is strings, otherwise replace with {}
df.Pollutants = df.Pollutants.fillna('{}')  # the sample column holds strings
# df.Pollutants = df.Pollutants.fillna({i: {} for i in df.index})  # use this if the column holds dicts

# convert the column of stringified dicts to dicts
# skip this line if the column already contains dicts
df.Pollutants = df.Pollutants.apply(literal_eval)

# reset the index if it is not unique integers from 0 to n-1
# df.reset_index(inplace=True)  # uncomment if needed

# normalize the column of dictionaries and join it to df
df = df.join(pd.json_normalize(df.Pollutants))

# drop the original column of stringified dicts
df.drop(columns=['Pollutants'], inplace=True)

# display(df)
   Station ID    a    b    c
0        8809   46    3   12
1        8810   36    5    8
2        8811  NaN    2    7
3        8812  NaN  NaN   11
4        8813   82  NaN   15
5        8814  NaN  NaN  NaN
```
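As noted above, `pd.json_normalize` also takes `record_path` and `meta` parameters for nested `dicts`. Here is a minimal sketch on made-up nested data (the `station` / `readings` structure is illustrative, not from the question):

```python
import pandas as pd

nested = [{'station': 8809,
           'readings': [{'gas': 'a', 'value': '46'}, {'gas': 'b', 'value': '3'}]},
          {'station': 8810,
           'readings': [{'gas': 'a', 'value': '36'}]}]

# record_path points at the list of records to explode;
# meta carries the listed top-level keys onto every exploded row
flat = pd.json_normalize(nested, record_path='readings', meta=['station'])

# display(flat)
  gas value station
0   a    46    8809
1   b     3    8809
2   a    36    8810
```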
`%%timeit` comparison on a DataFrame with 1M rows:

```python
# dataframe with 1M rows
# note: build dfb from df after the literal_eval step but before the
# .join / .drop steps, so Pollutants is still a column of dicts
dfb = pd.concat([df]*200000).reset_index(drop=True)
```
```python
%%timeit
dfb.join(pd.json_normalize(dfb.Pollutants))
[out]:
5.44 s ± 32.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```

```python
%%timeit
pd.concat([dfb.drop(columns=['Pollutants']), dfb.Pollutants.apply(pd.Series)], axis=1)
[out]:
4min 17s ± 2.44 s per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
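As an aside (not timed here, so treat the speed claim as a rough expectation rather than a measured result): once the column holds plain `dicts`, another common approach is to build the new columns directly from a list of dicts, which tends to land in the same performance ballpark as `pd.json_normalize` for flat, non-nested dictionaries:

```python
# assumes Pollutants already holds dicts (NaN filled with {})
pd.concat([dfb.drop(columns=['Pollutants']),
           pd.DataFrame(dfb.Pollutants.tolist(), index=dfb.index)],
          axis=1)
```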