Question
Hope you can help me. I am new to Python and pandas, so please bear with me. I am trying to find the common word among three data frames, and I am using a Jupyter Notebook.
Just for example:
df1=
A
dog
cat
cow
duck
snake
df2=
A
pig
snail
bird
dog
df3=
A
eagle
dog
snail
monkey
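In code, the example frames could be set up like this (a minimal sketch that just reproduces the data above):
import pandas as pd

# Three single-column data frames, matching the example data
df1 = pd.DataFrame({'A': ['dog', 'cat', 'cow', 'duck', 'snake']})
df2 = pd.DataFrame({'A': ['pig', 'snail', 'bird', 'dog']})
df3 = pd.DataFrame({'A': ['eagle', 'dog', 'snail', 'monkey']})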
There is only one column, A, in each of the data frames. I would like to find
- the common word among all three columns
- the words that are unique to their own column and not shared with the others.
Example:
duck is unique to df1, bird is unique to df2, and monkey is unique to df3.
I am using the code below, which works to some extent, but it isn't giving me what I want directly:
df1[df1['A'].isin(df2['A']) & (df2['A']) & (df3['A'])]
Kindly let me know where I am going wrong. Cheers
Answer 1:
The problem with your current approach is that you need to chain multiple isin calls. What's worse, you'd need to keep track of which dataframe is the largest and call isin on that one; otherwise, it doesn't work.
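Written out, that chained approach would look something like the sketch below, filtering df1 on membership in the other two columns:
>>> df1[df1['A'].isin(df2['A']) & df1['A'].isin(df3['A'])]
     A
0  dog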
To make things easy, you can use np.intersect1d:
>>> import numpy as np
>>> np.intersect1d(df3.A, np.intersect1d(df1.A, df2.A))
array(['dog'], dtype=object)
A similar method using functools.reduce + intersect1d, by piRSquared:
>>> from functools import reduce # python 3 only
>>> reduce(np.intersect1d, [df1.A, df2.A, df3.A])
array(['dog'], dtype=object)
Answer 2:
The simplest way is to use set intersection:
list(set(df1.A) & set(df2.A) & set(df3.A))
['dog']
However, if you have a long list of these things, I'd use reduce from functools. This same technique can be used with @cᴏʟᴅsᴘᴇᴇᴅ's use of np.intersect1d as well.
from functools import reduce
list(reduce(set.intersection, map(set, [df1.A, df2.A, df3.A])))
['dog']
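For the second part of the question, the words that are unique to each column, the same set operations can be applied with set difference; here is a minimal sketch using the df1, df2 and df3 from the question:
sorted(set(df1.A) - set(df2.A) - set(df3.A))
['cat', 'cow', 'duck', 'snake']
sorted(set(df2.A) - set(df1.A) - set(df3.A))
['bird', 'pig']
sorted(set(df3.A) - set(df1.A) - set(df2.A))
['eagle', 'monkey']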
Source: https://stackoverflow.com/questions/46556169/finding-common-elements-between-multiple-dataframe-columns