How to run a large matrix for cosine similarity in Python?

Submitted by 一笑奈何 on 2019-12-10 10:36:35

Question


I want to calculate cosine similarity between articles, but my current implementation would take far too long for the size of the data I need to process.

import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

I = [[3, 45, 7, 2], [2, 54, 13, 15], [2, 54, 1, 13]]

II = [2, 54, 13, 15]

# recent scikit-learn versions require 2-D inputs, hence the wrapping [II]
print(cosine_similarity([II], I))

With the example above, computing the similarity between II and I already takes about 1.0 s, and my real data has dimensions of roughly (100K, 2K).

Are there other packages I could use to handle such a large matrix?


Answer 1:


You can use pairwise_kernels with metric='cosine' and n_jobs=-1. That will split the data and compute the similarities in parallel across all CPU cores.
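A minimal sketch of this suggestion, using the arrays from the question (n_jobs=-1 requests all available cores):

```python
import numpy as np
from sklearn.metrics.pairwise import pairwise_kernels

I = np.array([[3, 45, 7, 2], [2, 54, 13, 15], [2, 54, 1, 13]])
II = np.array([[2, 54, 13, 15]])  # 2-D: a single row

# metric='cosine' makes pairwise_kernels compute cosine similarity;
# n_jobs=-1 splits the work across all CPU cores
sim = pairwise_kernels(II, I, metric='cosine', n_jobs=-1)
print(sim)  # shape (1, 3); entry for the identical row is 1.0
```

Note that parallelism only pays off at scale; on tiny inputs like this the process overhead dominates.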




Answer 2:


Using sklearn.preprocessing.normalize, this runs faster for me:

result = np.dot(normalize([II], axis=1), normalize(I, axis=1).T)

(The dot product between unit-normalized vectors is equivalent to cosine similarity.)



Source: https://stackoverflow.com/questions/34890861/how-to-run-a-large-matrix-for-cosine-similarity-in-python
