Question
I have a dataframe as follows, only with more rows:
import pandas as pd

data = {'First': ['First value', 'Second value', 'Third value'],
        'Second': [['old', 'new', 'gold', 'door'],
                   ['old', 'view', 'bold', 'door'],
                   ['new', 'view', 'world', 'window']]}
df = pd.DataFrame(data, columns=['First', 'Second'])
To calculate the Jaccard similarity I found this snippet online (not my solution):
def lexical_overlap(doc1, doc2):
    words_doc1 = set(doc1)
    words_doc2 = set(doc2)
    intersection = words_doc1.intersection(words_doc2)
    union = words_doc1.union(words_doc2)
    return float(len(intersection)) / len(union) * 100
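As a quick sanity check (my own example, assuming the df defined above), the function can be called directly on two of the lists in the Second column:

# rows 0 and 1 share {'old', 'door'} (2 words) out of a 6-word union -> 33.33
lexical_overlap(df.loc[0, 'Second'], df.loc[1, 'Second'])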
What I would like is for the measure to take each row of the Second column as a doc, compare each pair of rows, and output the measure together with the row names from the First column, something like this:
First value and Second value = 80
First value and Third value = 95
Second value and Third value = 90
Answer 1:
Well, I'd do it somewhat like this:
from itertools import combinations
for val in list(combinations(range(len(df)), 2)):
firstlist = df.iloc[val[0],1]
secondlist = df.iloc[val[1],1]
value = round(lexical_overlap(firstlist,secondlist),2)
print(f"{df.iloc[val[0],0]} and {df.iloc[val[1],0]}'s value is: {value}")
Output:
First value and Second value's value is: 33.33
First value and Third value's value is: 14.29
Second value and Third value's value is: 14.29
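If you would rather collect the pairwise scores into a DataFrame than print them, a minimal sketch along the same lines (assuming the same df and lexical_overlap as above) could be:

from itertools import combinations

pairs = []
for i, j in combinations(range(len(df)), 2):
    pairs.append({
        'pair': f"{df.iloc[i, 0]} and {df.iloc[j, 0]}",
        'jaccard': round(lexical_overlap(df.iloc[i, 1], df.iloc[j, 1]), 2),
    })
result = pd.DataFrame(pairs)
print(result)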
Answer 2:
Since your data is not big, you can try broadcasting with a slightly different approach:
# one-hot encode the words of each row
# (note: `sum(level=0)` was removed in pandas 2.0; `groupby(level=0).sum()` is the equivalent)
s = pd.get_dummies(df.Second.explode()).groupby(level=0).sum().values

# pair-wise Jaccard: the matrix product counts intersections,
# the broadcasted OR counts unions
(s @ s.T) / (s | s[:, None, :]).sum(-1) * 100
Output:
array([[100.        ,  33.33333333,  14.28571429],
       [ 33.33333333, 100.        ,  14.28571429],
       [ 14.28571429,  14.28571429, 100.        ]])
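To attach the row names from the First column to this matrix (my addition, not part of the original answer), you can wrap the result in a DataFrame using the array s from above:

m = (s @ s.T) / (s | s[:, None, :]).sum(-1) * 100
jaccard = pd.DataFrame(m, index=df.First, columns=df.First)
print(jaccard.round(2))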
Source: https://stackoverflow.com/questions/65308769/pandascalculate-jaccard-similarity-for-every-row-based-on-the-value-in-another