I am a newbie in text mining; here is my situation. Suppose I have a list of words ['car', 'dog', 'puppy', 'vehicle']. I would like to cluster the words into k groups.
Following up on Brian O'Donnell's answer: once you've computed the semantic similarities with word2vec (or FastText, GloVe, ...), you can cluster the resulting matrix with sklearn.cluster. I've found that for small matrices, spectral clustering gives the best results.
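As a minimal sketch of that pipeline, here is spectral clustering applied to a precomputed similarity matrix. The matrix below is a hypothetical stand-in for the pairwise word2vec similarities of ['car', 'dog', 'puppy', 'vehicle'] (scaled to be non-negative, as `affinity='precomputed'` expects an affinity, not a distance):

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Hypothetical 4x4 similarity matrix for ['car', 'dog', 'puppy', 'vehicle'];
# in practice you would fill this with (rescaled) word2vec cosine similarities.
similarity = np.array([
    [1.0, 0.1, 0.1, 0.9],   # car
    [0.1, 1.0, 0.8, 0.1],   # dog
    [0.1, 0.8, 1.0, 0.1],   # puppy
    [0.9, 0.1, 0.1, 1.0],   # vehicle
])

# affinity='precomputed' tells sklearn the input is already a similarity
# matrix rather than raw feature vectors.
clustering = SpectralClustering(n_clusters=2, affinity='precomputed',
                                random_state=0)
labels = clustering.fit_predict(similarity)
```

With this block structure, 'car'/'vehicle' end up in one cluster and 'dog'/'puppy' in the other.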
It's worth keeping in mind that word vectors often lie on (or near) a high-dimensional sphere, so cosine similarity is the natural measure of relatedness. K-means with plain Euclidean distance ignores this geometry and can give poor results, especially for pairs of words that aren't immediate neighbors.
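One common workaround, sketched below with made-up vectors, is to L2-normalize the vectors first: on the unit sphere, squared Euclidean distance is a monotone function of cosine distance (||u - v||² = 2 - 2·cos(u, v)), so ordinary K-means then behaves like "spherical" K-means:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

# Hypothetical word vectors (one row per word); in practice these would
# come from a trained word2vec / GloVe / FastText model.
rng = np.random.default_rng(0)
vectors = rng.normal(size=(4, 50))

# Project onto the unit sphere so Euclidean K-means respects cosine geometry.
unit = normalize(vectors)

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(unit)
```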
If you want to cluster words by their "semantic similarity" (i.e. similarity of meaning), take a look at Word2Vec and GloVe. Gensim has an implementation of Word2Vec. The web page "Word2Vec Tutorial" by Radim Rehurek walks through using Word2Vec to find similar words.
Adding to what's already been said about similarity scores: choosing k in clustering applications is generally aided by scree plots (also known as "elbow curves"). In these plots you put a measure of dispersion, typically the within-cluster sum of squares, on the y-axis and the number of clusters on the x-axis. The elbow is the point where the curve's slope changes most sharply, i.e. where its curvature (the discrete second derivative) peaks; it gives a more objective measure of when additional clusters stop adding "uniqueness."
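The elbow heuristic can be sketched as follows; the synthetic 2-D points (a stand-in for word vectors) form three obvious groups, and the discrete second difference of the inertia curve is used as a crude curvature proxy:

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic points with three well-separated groups.
rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(loc=c, scale=0.2, size=(30, 2))
    for c in ([0, 0], [5, 5], [0, 5])
])

# Within-cluster sum of squares (sklearn's `inertia_`) for each candidate k.
ks = range(1, 7)
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0)
            .fit(points).inertia_ for k in ks]

# The elbow is where the curve bends hardest: the largest discrete
# second difference. np.diff(..., 2) starts at k=2, hence the offset.
elbow_k = int(np.argmax(np.diff(inertias, 2))) + 2
```

For real use you would plot `inertias` against `ks` and eyeball the bend; the second-difference rule is just a way to automate that judgment.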