Customize Distance Formula of K-means in Apache Spark Python

北恋 · 2021-01-14 03:03

Now I'm using K-means for clustering, following this tutorial and API.

But I want to use a custom formula to calculate the distances. So how can I pass a custom distance function?

1 Answer
  •  被撕碎了的回忆
    2021-01-14 03:46

    In general, using a different distance measure doesn't make sense, because the k-means algorithm (unlike k-medoids) is well defined only for Euclidean distances.
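    A small numeric sketch of why this is so: the k-means update step always recomputes each center as the *mean* of its cluster, and the mean is the minimizer of the within-cluster cost only for squared Euclidean distance. Under another cost, say L1 (Manhattan), a different statistic (the median) is optimal, so the k-means iteration would not converge to the right centers. The 1-D points below are an illustrative example, not from the original post:

    ```python
    # One cluster of 1-D points; compare the mean and the median as the center
    # under two costs: squared Euclidean (the k-means objective) and L1.
    points = [1.0, 2.0, 3.0, 10.0]

    mean = sum(points) / len(points)                # 4.0
    median = sorted(points)[len(points) // 2 - 1]   # lower median for even n: 2.0

    def sq_cost(c):
        return sum((p - c) ** 2 for p in points)

    def l1_cost(c):
        return sum(abs(p - c) for p in points)

    # The mean minimizes the squared-Euclidean cost...
    assert sq_cost(mean) <= sq_cost(median)         # 50.0 <= 66.0
    # ...but the median is strictly better under the L1 cost, so a k-means
    # update (which recomputes the mean) moves away from the L1 optimum.
    assert l1_cost(median) < l1_cost(mean)          # 10.0 < 12.0
    ```

    This is why swapping in an arbitrary distance gives you a different algorithm (e.g. k-medians or k-medoids), not "k-means with another metric".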

    See Why does k-means clustering algorithm use only Euclidean distance metric? for an explanation.

    Moreover, MLlib algorithms are implemented in Scala, and PySpark provides only the wrappers required to execute the Scala code. Therefore, providing a custom metric as a Python function wouldn't be technically possible without significant changes to the API.

    Please note that since Spark 2.4 there are two built-in measures that can be used with pyspark.ml.clustering.KMeans and pyspark.ml.clustering.BisectingKMeans (see the distanceMeasure Param):

    • euclidean for Euclidean distance.
    • cosine for cosine distance.
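    A minimal sketch of selecting the built-in cosine measure (assuming Spark 2.4+ and a local PySpark session; the toy data is made up for illustration):

    ```python
    from pyspark.sql import SparkSession
    from pyspark.ml.clustering import KMeans
    from pyspark.ml.linalg import Vectors

    spark = SparkSession.builder.master("local[1]").appName("kmeans-cosine").getOrCreate()

    # Two directions in 2-D; with cosine distance, clustering is by angle,
    # not by magnitude. Cosine requires non-zero vectors.
    data = spark.createDataFrame(
        [(Vectors.dense([0.0, 1.0]),), (Vectors.dense([0.1, 0.9]),),
         (Vectors.dense([1.0, 0.0]),), (Vectors.dense([0.9, 0.1]),)],
        ["features"],
    )

    # distanceMeasure accepts "euclidean" (default) or "cosine".
    kmeans = KMeans(k=2, seed=1, distanceMeasure="cosine")
    model = kmeans.fit(data)

    measure = kmeans.getDistanceMeasure()
    centers = model.clusterCenters()

    spark.stop()
    ```

    The same `distanceMeasure` parameter works for BisectingKMeans; anything beyond these two measures still requires changing the Scala side.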

    Use at your own risk.
