Question
I'm trying to use TF-IDF to sort documents into categories. I've calculated the tf_idf for some documents, but now when I try to calculate the Cosine Similarity between two of these documents I get a traceback saying:
#len(u)==201, len(v)==246
cosine_distance(u, v)
ValueError: objects are not aligned
#this works though:
cosine_distance(u[:200], v[:200])
>> 0.52230249969265641
Is slicing the vector so that len(u)==len(v) the right approach? I would think that cosine similarity would work with vectors of different lengths.
I'm using this function:
import math
import numpy

def cosine_distance(u, v):
    """
    Returns the cosine of the angle between vectors v and u. This is equal to
    u.v / |u||v|.
    """
    return numpy.dot(u, v) / (math.sqrt(numpy.dot(u, u)) * math.sqrt(numpy.dot(v, v)))
Also -- is the order of the tf_idf values in the vectors important? Should they be sorted -- or is it of no importance for this calculation?
Answer 1:
Are you computing the cosine similarity of term vectors? Term vectors should be the same length: if a word isn't present in a document, that document's vector should have a 0 for that term.
I'm not exactly sure what vectors you're applying cosine similarity to, but for cosine similarity the vectors should always be the same length, and order very much matters.
Example:
Term | Doc1 | Doc2
Foo  |  .3  |  .7
Bar  |   0  |   8
Baz  |   1  |   1
Here you have two vectors, (.3, 0, 1) and (.7, 8, 1), and can compute the cosine similarity between them. If you compared (.3, 1) and (.7, 8), you'd be comparing the Doc1 score of Baz against the Doc2 score of Bar, which wouldn't make sense.
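A minimal sketch of that computation with NumPy (the values are just the example scores from the table above):

```python
import numpy as np

# Scores for the terms Foo, Bar, Baz, in the same fixed order for both docs
doc1 = np.array([0.3, 0.0, 1.0])
doc2 = np.array([0.7, 8.0, 1.0])

def cosine_similarity(u, v):
    # u.v / (|u| |v|)
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine_similarity(doc1, doc2))  # roughly 0.143
```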
Answer 2:
You need to multiply the entries for corresponding words in the vectors, so there should be a global order for the words. This means that, in theory, your vectors should be the same length.
In practice, if one document was seen before the other, words from the second document may have been added to the global order after the first document was processed. So even though both vectors follow the same order, the first vector may be shorter, since it has no entries for words that weren't in it.
Document 1: The quick brown fox jumped over the lazy dog.
Global order: The quick brown fox jumped over the lazy dog
Vector for Doc 1: 1 1 1 1 1 1 1 1 1
Document 2: The runner was quick.
Global order: The quick brown fox jumped over the lazy dog runner was
Vector for Doc 1: 1 1 1 1 1 1 1 1 1
Vector for Doc 2: 1 1 0 0 0 0 0 0 0 1 1
In this case, in theory you would need to pad the Document 1 vector with zeroes at the end. In practice, when computing the dot product, you only need to multiply elements up to the end of Vector 1: omitting the extra elements of Vector 2 is exactly the same as multiplying them by zero, but visiting them is slower.
Then you can compute the magnitude of each vector separately, and for that the vectors don't need to be of the same length.
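That shortcut can be sketched as follows, assuming (as in this answer) that the shorter vector is always a prefix of the longer one under the same global order:

```python
import math

def dot_truncated(u, v):
    # Multiply only up to the end of the shorter vector; the missing
    # trailing entries of the shorter one are implicitly zero.
    n = min(len(u), len(v))
    return sum(u[i] * v[i] for i in range(n))

def cosine_similarity(u, v):
    # Each magnitude is computed over its full vector separately,
    # so u and v need not have the same length here.
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(x * x for x in v))
    return dot_truncated(u, v) / (norm_u * norm_v)

doc1 = [1] * 9                             # vector for Document 1 above
doc2 = [1, 1, 0, 0, 0, 0, 0, 0, 0, 1, 1]   # vector for Document 2 above
print(cosine_similarity(doc1, doc2))       # 2 / (3 * 2) = 0.333...
```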
Answer 3:
Try building the vectors before feeding them to the cosine_distance function:
from collections import Counter
from nltk import cluster

def buildVector(iterable1, iterable2):
    counter1 = Counter(iterable1)
    counter2 = Counter(iterable2)
    # The union of both vocabularies fixes one shared order for both vectors.
    all_items = set(counter1.keys()).union(set(counter2.keys()))
    vector1 = [counter1[k] for k in all_items]
    vector2 = [counter2[k] for k in all_items]
    return vector1, vector2

l1 = "Julie loves me more than Linda loves me".split()
l2 = "Jane likes me more than Julie loves me or".split()

v1, v2 = buildVector(l1, l2)
print(cluster.util.cosine_distance(v1, v2))
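One caveat worth checking against your NLTK version: current NLTK defines cluster.util.cosine_distance as 1 minus the cosine (a distance), while the function in the question returns the cosine itself (a similarity). A dependency-free sketch of the similarity on the same vectors built above:

```python
import math
from collections import Counter

def build_vectors(tokens1, tokens2):
    # A shared, consistently ordered vocabulary gives both vectors
    # the same length and the same term order.
    c1, c2 = Counter(tokens1), Counter(tokens2)
    vocab = sorted(set(c1) | set(c2))
    return [c1[w] for w in vocab], [c2[w] for w in vocab]

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

l1 = "Julie loves me more than Linda loves me".split()
l2 = "Jane likes me more than Julie loves me or".split()
v1, v2 = build_vectors(l1, l2)
print(cosine_similarity(v1, v2))  # roughly 0.783
```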
Source: https://stackoverflow.com/questions/3121217/cosine-similarity-of-vectors-of-different-lengths