Python: clustering similar words based on word2vec

Submitted by 一曲冷凌霜 on 2019-12-12 04:54:20

Question


This might be a naive question. I have a tokenized corpus on which I have trained Gensim's Word2Vec model. The code is below:

import string
import gensim
from newspaper import Article
from nltk import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer, SnowballStemmer

stop = set(stopwords.words('english'))
exclude = set(string.punctuation)
lemma = WordNetLemmatizer()
snowball = SnowballStemmer('english')

site = Article("http://www.datasciencecentral.com/profiles/blogs/blockchain-and-artificial-intelligence-1")
site.download()
site.parse()

def clean(doc):
    # lowercase, drop stop words and punctuation, then lemmatize and stem
    stop_free = " ".join([i for i in word_tokenize(doc.lower()) if i not in stop])
    punc_free = ''.join(ch for ch in stop_free if ch not in exclude)
    normalized = " ".join(lemma.lemmatize(word) for word in punc_free.split())
    snowed = " ".join(snowball.stem(word) for word in normalized.split())
    return snowed

b = clean(site.text)
# Word2Vec expects a list of tokenized sentences, so pass the tokens rather than the raw string
model = gensim.models.Word2Vec([b.split()], min_count=1, size=32)
print(model)  # Prints: Word2Vec(vocab=643, size=32, alpha=0.025)

To cluster similar words, I am using PCA to visualize them in two dimensions. But the problem is that it forms only one big cluster, as seen in the image.

PCA & scatter plot Code:

from sklearn.decomposition import PCA
import pandas as pd
import matplotlib.pyplot as plt

vocab = list(model.wv.vocab)
X = model.wv[vocab]          # word vectors, one row per vocabulary word
pca = PCA(n_components=2)
X_pca = pca.fit_transform(X)

df = pd.concat([pd.DataFrame(X_pca),
                pd.Series(vocab)],
               axis=1)
df.columns = ['x', 'y', 'word']

fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
ax.scatter(df['x'], df['y'])
plt.show()

So, I have two questions here:

1) Is just one article enough to get a clear segregation of the clusters?

2) If I have a model trained on a huge corpus and I want to find the similar words in a new article and visualize them (i.e. the words in the article I'm predicting on) as clusters, is there a way to do that?

I highly appreciate your suggestions. Thank you.


Answer 1:


  1. No, not really. For reference, common word2vec models trained on Wikipedia (in English) are built from around 3 billion words.
  2. You can use KNN (or something similar). Gensim has the most_similar function to get the closest words to a given word, and with a dimensionality reduction (like PCA or t-SNE) you can get a nice cluster plot; a short sketch follows below. (Gensim doesn't have a t-SNE module, but scikit-learn does, so you can use that.)
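
As a rough illustration of point 2, here is a minimal sketch. It assumes the trained model from the question is still in memory as model, and that 'blockchain' is just an example query word that actually appears in its vocabulary; it asks gensim for the nearest neighbours of that word and then projects the whole vocabulary to 2-D with scikit-learn's t-SNE:

from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# nearest neighbours of an example query word (must be in the vocabulary)
print(model.wv.most_similar('blockchain', topn=10))

words = list(model.wv.vocab)
vectors = model.wv[words]

# t-SNE tends to reveal local cluster structure better than plain PCA
tsne = TSNE(n_components=2, perplexity=30, random_state=0)
coords = tsne.fit_transform(vectors)

plt.scatter(coords[:, 0], coords[:, 1], s=5)
for word, (x, y) in zip(words, coords):
    plt.annotate(word, (x, y), fontsize=6)
plt.show()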

By the way, you're referring to an image, but it isn't available.



Source: https://stackoverflow.com/questions/45425070/python-clustering-similar-words-based-on-word2vec
