Clustering with a distance matrix

醉梦人生 2020-12-16 17:17

I have a (symmetric) matrix M that represents the distance between each pair of nodes. For example,

    A   B   C   D   E   F   G   H   I   J   K   L         


        
3 answers
  • 2020-12-16 17:54

    One more possible way is to use Partitioning Around Medoids, often called K-medoids. R's cluster package provides the pam function, which accepts a distance matrix as its input data.
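
    A minimal sketch (assuming M is the distance matrix from the question and k = 3 is an arbitrary number of clusters):

    library(cluster)
    d <- as.dist(M)                      # treat M as a precomputed distance matrix
    fit <- pam(d, k = 3, diss = TRUE)    # k-medoids with 3 clusters (arbitrary choice)
    fit$clustering                       # medoid-based cluster label for each node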

  • 2020-12-16 18:02

    Hierarchical clustering works directly with the distance matrix instead of the actual observations. If you know the number of clusters, you already have your stopping criterion (stop when there are k clusters). The main trick is to choose an appropriate linkage method. Also, this paper (PDF) gives an excellent overview of all kinds of clustering methods.
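
    A minimal sketch in base R (assuming M is the distance matrix and k = 3 is the desired number of clusters; the linkage method is a choice to experiment with):

    d <- as.dist(M)                        # precomputed distances
    hc <- hclust(d, method = "average")    # average linkage; try "complete", "ward.D2", ...
    clusters <- cutree(hc, k = 3)          # cut the dendrogram once k clusters remain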

  • 2020-12-16 18:03

    Well, it is possible to perform K-means clustering on a given similarity matrix. First double-center the matrix, then take its eigendecomposition. The final and most important step is to multiply the first two eigenvectors by the square roots of the corresponding eigenvalues to get 2-D coordinates, and then run K-means on them (this is essentially classical MDS). The code below shows how to do it; fpdist is the similarity matrix, which you can replace with your own.

    # Double-centering (the Torgerson transform used in classical MDS).
    mds.tau <- function(H)
    {
      n <- nrow(H)
      P <- diag(n) - 1/n          # centering matrix I - (1/n) 11'
      return(-0.5 * P %*% H %*% P)
    }

    B <- mds.tau(fpdist)                   # fpdist: the input (dis)similarity matrix
    eig <- eigen(B, symmetric = TRUE)
    v <- eig$values[1:2]
    v[v < 0] <- 0                          # convert negative eigenvalues to 0
    X <- eig$vectors[, 1:2] %*% diag(sqrt(v))   # 2-D embedding coordinates
    km <- kmeans(X, centers = 5, iter.max = 1000, nstart = 10000)

    # Alternative embedding: base R's cmdscale() performs classical MDS directly.
    cmd <- cmdscale(fpdist)
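
    Once this runs, km$cluster holds the cluster label for each node; as a quick sanity check (a hypothetical extra step, assuming fpdist has row names) you can plot the two embedding coordinates coloured by cluster:

    plot(X, col = km$cluster, pch = 19, xlab = "MDS 1", ylab = "MDS 2")
    text(X, labels = rownames(fpdist), pos = 3, cex = 0.7)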
    