DBSCAN on Spark: which implementation?

Backend · Unresolved · 4 answers · 2188 views
名媛妹妹 2020-12-28 21:09

I would like to do some DBSCAN on Spark. I have currently found 2 implementations:

  • https://github.com/irvingc/dbscan-on-spark
  • https://github.com/alitouka/spark_dbscan
4 Answers
  • 2020-12-28 21:39

    Please try ELKI. As this is Java, it should be easy to call from Scala.

    ELKI is very well optimized, and with indexes it will scale to quite large data sets.

    We tried to include one of these Spark implementations in our benchmarking study - but it ran out of memory (and it was the only implementation to run out of memory ... the k-means of Spark and Mahout were also among the slowest):

    Hans-Peter Kriegel, Erich Schubert, and Arthur Zimek.
    The (black) art of runtime evaluation: Are we comparing algorithms or implementations?
    In: Knowledge and Information Systems (KAIS). 2016, 1–38

    Professor Neukirchen benchmarked parallel implementations of DBSCAN in this technical report:

    Helmut Neukirchen
    Survey and Performance Evaluation of DBSCAN Spatial Clustering Implementations for Big Data and High-Performance Computing Paradigms

    Apparently he got some of the Spark implementations working, but noted that:

    The result is devastating: none of the implementations for Apache Spark is anywhere near to the HPC implementations. In particular on bigger (but still rather small) data sets, most of them fail completely and do not even deliver correct results.

    and earlier:

    When running any of the "Spark DBSCAN" implementations while making use of all available cores of our cluster, we experienced out-of-memory exceptions.

    (also, "Spark DBSCAN" took 2406 seconds on 928 cores, ELKI took 997 seconds on 1 core for the smaller benchmark - the other Spark implementation didn't fare too well either, in particular it did not return the correct result...)

    "DBSCAN on Spark" did not crash, but returned completely wrong clusters.

    While "DBSCAN on Spark" finishes faster, it delivered completely wrong clustering results. Due to the hopelessly long run-times of the DBSCAN implementations for Spark already with the maximum number of cores, we did not perform measurements with a lower number of cores.

    You can wrap a double[][] array as ELKI database:

    // Adapter to load data from an existing array.
    DatabaseConnection dbc = new ArrayAdapterDatabaseConnection(data);
    // Create a database (which may contain multiple relations!)
    Database db = new StaticArrayDatabase(dbc, null);
    // Load the data into the database (do NOT forget to initialize...)
    db.initialize();
    
    // Squared Euclidean is faster than Euclidean.
    Clustering<Model> c = new DBSCAN<NumberVector>(
      SquaredEuclideanDistanceFunction.STATIC, eps*eps, minpts).run(db);
    
    for(Cluster<Model> clu : c.getAllClusters()) {
      // Process clusters
    }
    

    See also: Java API example (in particular, how to map DBIDs back to row indexes). For better performance, pass an index factory (such as new CoverTree.Factory(...)) as second parameter to the StaticArrayDatabase constructor.
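    Since the answer above suggests calling ELKI from Scala, the Java snippet translates almost one-to-one. A minimal sketch, assuming ELKI 0.7.x package names (the packages moved in later ELKI releases, so adjust the imports to your version):

    ```scala
    // Hedged sketch: running ELKI's DBSCAN from Scala, mirroring the Java
    // example above. Package paths follow ELKI 0.7.x and are an assumption;
    // verify them against the ELKI version on your classpath.
    import de.lmu.ifi.dbs.elki.algorithm.clustering.DBSCAN
    import de.lmu.ifi.dbs.elki.data.{Clustering, NumberVector}
    import de.lmu.ifi.dbs.elki.data.model.Model
    import de.lmu.ifi.dbs.elki.database.StaticArrayDatabase
    import de.lmu.ifi.dbs.elki.datasource.ArrayAdapterDatabaseConnection
    import de.lmu.ifi.dbs.elki.distance.distancefunction.minkowski.SquaredEuclideanDistanceFunction

    object ElkiDbscanFromScala {
      def cluster(data: Array[Array[Double]], eps: Double, minPts: Int): Clustering[Model] = {
        // Wrap the existing array and build a static database, as in the Java code.
        val dbc = new ArrayAdapterDatabaseConnection(data)
        val db  = new StaticArrayDatabase(dbc, null)
        db.initialize()
        // Squared Euclidean with eps*eps avoids square roots (same trick as above).
        new DBSCAN[NumberVector](SquaredEuclideanDistanceFunction.STATIC, eps * eps, minPts).run(db)
      }
    }
    ```

    The second `StaticArrayDatabase` argument is where the index factories mentioned above (e.g. a cover tree) would go instead of `null`.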

  • 2020-12-28 21:46

    I successfully used the second library (https://github.com/alitouka/spark_dbscan) in my project. However, I couldn't pull it in as a dependency like this:

    libraryDependencies += "org.alitouka" % "spark_dbscan_2.10" % "0.0.4"
    
    resolvers += "Aliaksei Litouka's repository" at "http://alitouka-public.s3-website-us-east-1.amazonaws.com/"
    

    Instead, I downloaded the code and updated it to Spark 2.2.1. Some additional libraries had to be added as well. Finally, after adding the code to my project, it works!
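    If you take the same route (building the fork from source rather than using the published Scala 2.10 artifact), the build definition for a Spark 2.2.x build might look roughly like this. The version numbers are illustrative assumptions, not taken from the original post:

    ```scala
    // Hypothetical build.sbt fragment for compiling an updated spark_dbscan
    // fork against Spark 2.2.1 (versions are assumptions, adjust as needed).
    name := "spark-dbscan-fork"

    scalaVersion := "2.11.12" // Spark 2.2.x is published for Scala 2.11

    libraryDependencies ++= Seq(
      "org.apache.spark" %% "spark-core" % "2.2.1" % Provided,
      "org.apache.spark" %% "spark-sql"  % "2.2.1" % Provided
    )
    ```

    Marking the Spark artifacts as `Provided` keeps them out of your assembly jar, since the cluster supplies them at runtime.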

  • 2020-12-28 21:52

    You can also consider using smile, which provides an implementation of DBSCAN. The most direct way is to use groupBy combined with either mapGroups or flatMapGroups, and run dbscan inside each group. Here's an example:

      import smile.clustering._
    
      val dataset: Array[Array[Double]] = Array(
        Array(100, 100),
        Array(101, 100),
        Array(100, 101),
        Array(100, 100),
        Array(101, 100),
        Array(100, 101),
    
        Array(0, 0),
        Array(1, 0),
        Array(1, 2),
        Array(1, 1)
      )
    
      val dbscanResult = dbscan(dataset, minPts = 3, radius = 5)
      println(dbscanResult)
    
      // output
      DBSCAN clusters of 10 data points:
      0     6 (60.0%)
      1     4 (40.0%)
      Noise     0 ( 0.0%)
    

    You can also write a User Defined Aggregate Function (UDAF) if you need to eke out more performance.

    I use this approach at work to cluster time-series data: grouping with Spark's time window function and then executing DBSCAN within each window lets us parallelize the implementation.
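    The windowed grouping described above can be sketched roughly as follows. The schema (`ts`, `x`, `y`), the window length, the DBSCAN parameters, and the bucketing-by-truncated-timestamp trick are all illustrative assumptions, not details from the original answer; the label field `y` follows Smile 2.x:

    ```scala
    // Hedged sketch: run Smile's DBSCAN independently inside each time window
    // of a Spark Dataset, using groupByKey + flatMapGroups.
    import org.apache.spark.sql.{Dataset, SparkSession}
    import smile.clustering.dbscan

    case class Point(ts: java.sql.Timestamp, x: Double, y: Double)
    case class Labeled(x: Double, y: Double, cluster: Int)

    def clusterPerWindow(spark: SparkSession, points: Dataset[Point],
                         windowMs: Long = 10 * 60 * 1000L): Dataset[Labeled] = {
      import spark.implicits._
      points
        .groupByKey(p => p.ts.getTime / windowMs) // bucket each point into a time window
        .flatMapGroups { (_, pts) =>
          val arr  = pts.toArray
          val data = arr.map(p => Array(p.x, p.y))
          // Cluster one window's points locally with Smile's DBSCAN.
          val model = dbscan(data, minPts = 3, radius = 5.0)
          arr.indices.map(i => Labeled(arr(i).x, arr(i).y, model.y(i)))
        }
    }
    ```

    Because each window is clustered independently on an executor, the work parallelizes across windows rather than across points, which is the design trade-off this approach makes.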

    I was inspired by the following article to do this.

  • 2020-12-28 22:01

    I tested https://github.com/irvingc/dbscan-on-spark and can say that it consumes a lot of memory. For a 400K-point dataset with a smooth distribution, I used -Xmx12084m, and even then it ran too long (>20 min). In addition, it only supports 2D data. I built the project with Maven, not sbt.

    I also tested the second implementation. It is still the best that I have found. Unfortunately, the author has not maintained it since 2015. It really took some time to raise the Spark version and resolve the version conflicts; I needed it to deploy on AWS.
