Question
I have a list in Python that contains duplicate dataframes. The goal is to remove the duplicates so that each distinct dataframe appears only once. Here is some code:
import pandas as pd
import numpy as np
##Creating Dataframes
data1_1 =[[1,2018,80], [2,2018,70]]
data1_2 = [[1,2017,77], [3,2017,62]]
df1 = pd.DataFrame(data1_1, columns = ['ID', 'Year', 'Score'])
df2 = pd.DataFrame(data1_2, columns = ['ID', 'Year', 'Score'])
###Creating list with duplicates
all_df_list = [df1,df1,df1,df2,df2,df2]
The desired result is this:
###Desired results
desired_list = [df1,df2]
Is there a way to remove any duplicated dataframes within a python list?
Thank you
Answer 1:
We can use pandas DataFrame.equals in a list comprehension, together with enumerate, to compare each dataframe in the list with the one before it and keep only the first of each run:
desired_list = [df for i, df in enumerate(all_df_list) if not df.equals(all_df_list[i - 1])]
print(desired_list)
[ ID Year Score
0 1 2018 80
1 2 2018 70, ID Year Score
0 1 2017 77
1 3 2017 62]
DataFrame.equals returns True if the compared dataframes are equal:
df1.equals(df1)
True
df1.equals(df2)
False
Note
As Wen-Ben noted in the comments, your list should be sorted so that equal dataframes are adjacent, e.g. [df1, df1, df1, df2, df2, df2], or with more df's: [df1, df1, df2, df2, df3, df3].
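If the list is not sorted, a slight variation of the same DataFrame.equals idea works regardless of order: keep each frame only when it equals none of the frames already kept. A sketch (quadratic in the number of frames, but order-independent):

```python
import pandas as pd

# Same example frames as in the question
df1 = pd.DataFrame([[1, 2018, 80], [2, 2018, 70]], columns=['ID', 'Year', 'Score'])
df2 = pd.DataFrame([[1, 2017, 77], [3, 2017, 62]], columns=['ID', 'Year', 'Score'])

# Duplicates deliberately interleaved, i.e. not sorted
all_df_list = [df1, df2, df1, df2, df1, df2]

desired_list = []
for df in all_df_list:
    # Keep df only if it equals none of the frames kept so far
    if not any(df.equals(kept) for kept in desired_list):
        desired_list.append(df)

print(len(desired_list))  # 2
```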
Answer 2:
This can also be done with numpy.unique:
_, idx = np.unique(np.array([x.values for x in all_df_list]), axis=0, return_index=True)
desired_list = [all_df_list[x] for x in idx]
desired_list
Out[829]:
[ ID Year Score
0 1 2017 77
1 3 2017 62, ID Year Score
0 1 2018 80
1 2 2018 70]
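Note that np.unique returns its results in sorted order, which is why df2 appears first above; the approach also assumes every dataframe has the same shape so the values can be stacked into one array. If the original list order matters, sorting the returned first-occurrence indices restores it, as this sketch shows:

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame([[1, 2018, 80], [2, 2018, 70]], columns=['ID', 'Year', 'Score'])
df2 = pd.DataFrame([[1, 2017, 77], [3, 2017, 62]], columns=['ID', 'Year', 'Score'])
all_df_list = [df1, df1, df1, df2, df2, df2]

# np.unique returns the unique stacked arrays in sorted order; return_index
# gives the position of each one's first occurrence in the original list
_, idx = np.unique(np.array([x.values for x in all_df_list]),
                   axis=0, return_index=True)

# Sorting the indices restores the original list order: df1 before df2
desired_list = [all_df_list[i] for i in sorted(idx)]
print(len(desired_list))  # 2
```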
Answer 3:
My first thought was to use a set, but dataframes are mutable and thus not hashable. Do you still need individual dataframes in your list, or is it useful to merge all of these into a single dataframe with all unique values?
You can pd.merge() them all into a single dataframe with unique values using reduce from functools:
from functools import reduce
reduced_df = reduce(lambda left, right: pd.merge(left, right, on=None, how='outer'),
all_df_list)
print(reduced_df)
# ID Year Score
# 0 1 2018 80
# 1 2 2018 70
# 2 1 2017 77
# 3 3 2017 62
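If a single deduplicated frame is the goal, a simpler variant in the same spirit (assuming all frames share the same columns) is to concatenate everything and drop duplicate rows:

```python
import pandas as pd

df1 = pd.DataFrame([[1, 2018, 80], [2, 2018, 70]], columns=['ID', 'Year', 'Score'])
df2 = pd.DataFrame([[1, 2017, 77], [3, 2017, 62]], columns=['ID', 'Year', 'Score'])
all_df_list = [df1, df1, df1, df2, df2, df2]

# Stack every frame into one, then drop the duplicated rows
reduced_df = (pd.concat(all_df_list, ignore_index=True)
                .drop_duplicates()
                .reset_index(drop=True))
print(reduced_df)
```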
Answer 4:
You just need to pass the list of duplicated dataframes to pd.Series, drop the duplicates, and convert the result back to a list:
In [229]: desired_list = pd.Series(all_df_list).drop_duplicates().tolist()
In [230]: desired_list
Out[230]:
[ ID Year Score
0 1 2018 80
1 2 2018 70, ID Year Score
0 1 2017 77
1 3 2017 62]
The final desired_list holds two dataframes equal to df1 and df2:
In [231]: desired_list[0] == df1
Out[231]:
ID Year Score
0 True True True
1 True True True
In [232]: desired_list[1] == df2
Out[232]:
ID Year Score
0 True True True
1 True True True
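Whether drop_duplicates can handle a Series of unhashable dataframes may depend on the pandas version. A more version-independent workaround is to build a hashable key from each frame's underlying array; note that values.tobytes() ignores column names and index labels, an assumption that is fine here because all frames share them:

```python
import pandas as pd

df1 = pd.DataFrame([[1, 2018, 80], [2, 2018, 70]], columns=['ID', 'Year', 'Score'])
df2 = pd.DataFrame([[1, 2017, 77], [3, 2017, 62]], columns=['ID', 'Year', 'Score'])
all_df_list = [df1, df1, df1, df2, df2, df2]

seen = set()
desired_list = []
for df in all_df_list:
    # Raw bytes of the underlying array serve as a hashable stand-in for
    # the frame (ignores column names and index labels)
    key = df.values.tobytes()
    if key not in seen:
        seen.add(key)
        desired_list.append(df)

print(len(desired_list))  # 2
```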
Source: https://stackoverflow.com/questions/55735009/removing-duplicate-dataframes-in-a-list