Scikit K-means clustering performance measure

自闭症患者 2021-02-04 08:42

I'm trying to do clustering with the K-means method, but I would like to measure the performance of my clustering. I'm not an expert, but I am eager to learn more about clustering.

3 Answers
  • 2021-02-04 08:45

    Normally, clustering is considered an unsupervised method, so it is difficult to establish a good performance metric (as also suggested in the previous comments).

    Nevertheless, a lot of useful information can be extracted from these algorithms (e.g. k-means). The problem is how to assign a semantics to each cluster and thus measure the "performance" of your algorithm. In many cases, a good way to proceed is through visualization of your clusters. Obviously, if your data have high-dimensional features, as often happens, visualization is not that easy. Let me suggest two ways to go, one using k-means and one using another clustering algorithm.

    • K-means: in this case, you can reduce the dimensionality of your data, for example with PCA. You can then plot the data in 2D and visualize your clusters (see the sketch after this list). Keep in mind that what you see is a projection of your data onto a 2D space, so it may not be very accurate, but it can still give you an idea of how your clusters are distributed.

    • Self-organizing map (SOM): this is a clustering algorithm based on neural networks which creates a discretized representation of the input space of the training samples, called a map, and is therefore also a method for dimensionality reduction. There is a very nice Python package called somoclu which implements this algorithm and offers an easy way to visualize the result. This algorithm is also convenient for clustering because it does not require an a priori choice of the number of clusters (in k-means you need to choose k; here you do not).
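
    As a minimal sketch of the first suggestion (the Iris data and k=3 here are my own assumptions, chosen only for illustration): cluster with k-means, project the features onto the first two principal components with PCA, and colour the scatter plot by cluster label.

    import matplotlib.pyplot as plt
    from sklearn.cluster import KMeans
    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA

    # Example data and an assumed number of clusters, purely for illustration
    X = load_iris().data
    labels = KMeans(n_clusters=3, random_state=0).fit_predict(X)

    # Project the 4-dimensional features onto the first two principal components
    X_2d = PCA(n_components=2).fit_transform(X)

    plt.scatter(X_2d[:, 0], X_2d[:, 1], c=labels)
    plt.xlabel("PC 1")
    plt.ylabel("PC 2")
    plt.title("K-means clusters projected onto 2 principal components")
    plt.show()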

  • 2021-02-04 08:48

    As you said, only the Silhouette Coefficient and the Calinski-Harabasz Index are available in scikit-learn. For the Dunn index you may use either this or this link.
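
    For reference, here is a minimal sketch of how both scores can be computed with scikit-learn (the Iris data and k=3 are assumptions, chosen only for illustration); note that older scikit-learn releases spell the second function calinski_harabaz_score.

    from sklearn.cluster import KMeans
    from sklearn.datasets import load_iris
    from sklearn.metrics import calinski_harabasz_score, silhouette_score

    # Example data and an assumed number of clusters, purely for illustration
    X = load_iris().data
    labels = KMeans(n_clusters=3, random_state=0).fit_predict(X)

    # Both metrics need only the data and the predicted labels; higher is better
    print("Silhouette Coefficient:", silhouette_score(X, labels))
    print("Calinski-Harabasz Index:", calinski_harabasz_score(X, labels))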

  • 2021-02-04 09:06

    Apart from the Silhouette Score, the Elbow Criterion can be used to evaluate K-means clustering. It is not available as a function/method in scikit-learn, so we need to calculate the SSE ourselves to evaluate K-means clustering with the Elbow Criterion.

    The idea of the Elbow Criterion is to choose the k (number of clusters) at which the SSE stops decreasing sharply. The SSE is defined as the sum of the squared distances between each member of a cluster and its centroid.
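
    Written as a formula, with clusters C_1, ..., C_k and centroids μ_1, ..., μ_k (this is the quantity scikit-learn exposes as inertia_):

    $$\mathrm{SSE} = \sum_{i=1}^{k} \sum_{x \in C_i} \lVert x - \mu_i \rVert^2$$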

    Calculate the Sum of Squared Errors (SSE) for each value of k, where k is the number of clusters, and plot the result as a line graph. The SSE tends to decrease toward 0 as k increases (SSE = 0 when k equals the number of data points in the dataset, because then each data point is its own cluster and there is no error between it and the center of its cluster).

    So the goal is to choose a small value of k that still has a low SSE; the elbow usually marks the point where increasing k starts to give diminishing returns.

    Iris dataset example:

    import matplotlib.pyplot as plt
    import pandas as pd
    from sklearn.cluster import KMeans
    from sklearn.datasets import load_iris

    iris = load_iris()
    X = pd.DataFrame(iris.data, columns=iris['feature_names'])
    # Use three of the four features; .copy() avoids pandas' SettingWithCopyWarning
    # when the cluster labels are added below.
    features = ['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)']
    data = X[features].copy()

    sse = {}
    for k in range(1, 10):
        # Fit only on the feature columns so the label column added below
        # does not leak into the fits for later values of k.
        kmeans = KMeans(n_clusters=k, max_iter=1000).fit(data[features])
        data["clusters"] = kmeans.labels_
        # Inertia: sum of squared distances of samples to their closest cluster center
        sse[k] = kmeans.inertia_

    plt.figure()
    plt.plot(list(sse.keys()), list(sse.values()))
    plt.xlabel("Number of clusters")
    plt.ylabel("SSE")
    plt.show()

    If the line graph looks like an arm, the "elbow" on the arm (the point where the curve bends) is the optimal value of k (number of clusters). For the line graph produced above, the optimal number of clusters is 3.
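
    If reading the bend off the plot feels subjective, a rough numeric aid (my own addition, not part of the plot above) is to print the relative drop in SSE between consecutive values of k; the drop usually shrinks sharply once k passes the elbow. This reuses the sse dictionary computed in the snippet above.

    ks = sorted(sse)
    for prev, curr in zip(ks, ks[1:]):
        drop = (sse[prev] - sse[curr]) / sse[prev]
        print(f"k={prev} -> k={curr}: SSE drops by {drop:.1%}")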

    Note: the Elbow Criterion is heuristic in nature and may not work for your data set. Use your intuition about the dataset and the problem you are trying to solve.

    Hope it helps!
