Question
I have a rather messy nested dictionary that I am trying to convert to a pandas data frame. The data is stored as a dictionary of lists inside a broader dictionary, where each key/value pair follows this pattern:
{userID_key: {postID_key: [list of hash tags]}}
Here's a more specific example of what the data looks like:
{'user_1': {'postID_1': ['#fitfam', '#gym', '#bro'],
            'postID_2': ['#swol', '#anotherhashtag']},
 'user_2': {'postID_78': ['#ripped', '#bro', '#morehashtags'],
            'postID_1': ['#buff', '#othertags']},
 'user_3': ...and so on }
I want to create a data frame that gives me the frequency counts of each hashtag for each (userID,postID) pair like below:
+------------+------------+--------+-----+-----+------+-----+
| UserID_key | PostID_key | fitfam | gym | bro | swol | ... |
+------------+------------+--------+-----+-----+------+-----+
| user_1 | postID_1 | 1 | 1 | 1 | 0 | ... |
| user_1 | postID_2 | 0 | 0 | 0 | 1 | ... |
| user_2 | postID_78 | 0 | 0 | 1 | 0 | ... |
| user_2 | postID_1 | 0 | 0 | 0 | 0 | ... |
| user_3 | ... | ... | ... | ... | ... | ... |
+------------+------------+--------+-----+-----+------+-----+
I had scikit-learn's CountVectorizer in mind, but it can't process a nested dictionary. I'd appreciate any help getting the data into the desired form.
Answer 1:
Building on my answer to another question, you can build and concatenate sub-frames using pd.concat, then use stack and get_dummies:
(pd.concat({k: pd.DataFrame.from_dict(v, orient='index') for k, v in dct.items()})
   .stack()              # reshape to one long Series of hashtag strings
   .str.get_dummies()    # one 0/1 indicator column per hashtag
   .sum(level=[0, 1]))   # aggregate back to one row per (user, post) pair
                 #anotherhashtag  #bro  #buff  #fitfam  #gym  #morehashtags  #othertags  #ripped  #swol
user_1 postID_1                0     1      0        1     1              0           0        0      0
       postID_2                1     0      0        0     0              0           0        0      1
user_2 postID_78               0     1      0        0     0              1           0        1      0
       postID_1                0     0      1        0     0              0           1        0      0
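For completeness, here's a self-contained sketch of the same approach that also moves the (user, post) MultiIndex into regular UserID_key / PostID_key columns, as in the desired table. The dictionary name dct and the final rename_axis/reset_index step are my own additions; on pandas 2.0+, where sum(level=...) has been removed, groupby(level=...) is the equivalent:

import pandas as pd

# Sample data with the structure shown in the question (the name `dct` is assumed).
dct = {'user_1': {'postID_1': ['#fitfam', '#gym', '#bro'],
                  'postID_2': ['#swol', '#anotherhashtag']},
       'user_2': {'postID_78': ['#ripped', '#bro', '#morehashtags'],
                  'postID_1': ['#buff', '#othertags']}}

# One sub-frame per user (rows = post IDs, columns = hashtag positions),
# concatenated into a single frame keyed by a (user, post) MultiIndex.
counts = (pd.concat({k: pd.DataFrame.from_dict(v, orient='index') for k, v in dct.items()})
            .stack()                    # reshape to one long Series of hashtag strings
            .str.get_dummies()          # one 0/1 indicator column per hashtag
            .groupby(level=[0, 1], sort=False).sum())  # counts per (user, post) pair

# Turn the MultiIndex into ordinary columns to match the requested layout.
counts = counts.rename_axis(['UserID_key', 'PostID_key']).reset_index()
print(counts)

If you're happy keeping the (user, post) pairs as a MultiIndex, you can drop the rename_axis/reset_index line; sort=False just keeps the rows in order of appearance instead of sorting the index.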
Source: https://stackoverflow.com/questions/54509358/nested-dict-of-lists-to-pandas-dataframe