Clustering Lat/Longs in a Database

南旧 2020-12-23 23:18

I'm trying to see if anyone knows how to cluster some Lat/Long results, using a database, to reduce the number of results sent over the wire to the application.


6 Answers
  • 2020-12-23 23:42

    For movielandmarks.com I used the clustering code from Mike Purvis, one of the authors of Beginning Google Maps Applications with PHP and AJAX. It builds trees of clusters/points for different zoom levels using PHP and MySQL, storing it in the database so that recall is very fast. Some of it may be useful to you even if you are using a different database.
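
    If you want to roll the same idea yourself, here is a minimal sketch of precomputing one cluster set per zoom level for storage in a table (my own reconstruction of the idea, not Mike Purvis's actual code or schema; all names are made up):

    Point   = Struct.new(:lat, :lon)
    Cluster = Struct.new(:zoom, :cell, :lat, :lon, :count)

    def precompute_clusters(points, max_zoom)
      clusters = []
      (0..max_zoom).each do |zoom|
        cells = 2**zoom # the grid doubles in resolution at every zoom level
        grouped = points.group_by do |p|
          [(((p.lon + 180) / 360.0) * cells).floor.clamp(0, cells - 1),
           (((p.lat + 90)  / 180.0) * cells).floor.clamp(0, cells - 1)]
        end
        grouped.each do |cell, pts|
          clusters << Cluster.new(zoom, cell,
                                  pts.sum(&:lat) / pts.size,
                                  pts.sum(&:lon) / pts.size,
                                  pts.size)
        end
      end
      clusters # persist these rows, e.g. in a clusters(zoom, cell, lat, lon, count) table
    end

    Serving a map view then becomes a single indexed lookup by (zoom, cell) rather than a clustering pass over the raw points.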

  • 2020-12-23 23:43

    Why not test multiple approaches?

    1. Translate the Weka library to .NET with IKVM.NET.
    2. Merge the assembly built from your code with weka.dll (using ILMerge) and load the result into your database (e.g. via SQL Server's CREATE ASSEMBLY).

    Then run some tests; no single clustering algorithm works best for every data set.
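
    The invocations look roughly like this (treat the exact flags as assumptions; they depend on your IKVM.NET and ILMerge versions):

    ikvmc -target:library weka.jar                        # compiles the Java bytecode to weka.dll
    ilmerge /out:WekaClustering.dll YourCode.dll weka.dll # one assembly to register in the database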

  • 2020-12-23 23:56

    I did a similar thing for a geographic application where I wanted to ensure I could cache point sets easily. My geohashing code looks like this:

    def compute_chunk(latitude, longitude)
      # Pack both grid indices into one integer key. floor_lat tops out at
      # 1800, safely below the 0x1000 (4096) multiplier... wait, 1800 < 4096,
      # so the two indices never overlap in the packed value.
      (floor_lon(longitude) * 0x1000) | floor_lat(latitude)
    end
    
    def floor_lon(longitude)
      # Shift longitude to a 0..360 range and bucket into 0.1-degree cells (0..3600).
      ((longitude + 180) * 10).to_i
    end
    
    def floor_lat(latitude)
      # Shift latitude to a 0..180 range and bucket into 0.1-degree cells (0..1800).
      ((latitude + 90) * 10).to_i
    end
    

    Everything got really easy from there. I had some code for grabbing all of the chunks from a given point to a given radius that would translate into a single memcache multiget (and some code to backfill that when it was missing).
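
    That radius lookup wasn't shown above; reconstructing it from the chunk scheme (my own sketch, with the radius given in degrees), it might look like:

    def chunks_in_radius(latitude, longitude, radius_degrees)
      # Every chunk ID in the bounding box around the point; these become the
      # keys for one memcache multiget. Ignores dateline wrap-around for brevity.
      cells = (radius_degrees * 10).ceil
      lat0  = floor_lat(latitude)
      lon0  = floor_lon(longitude)
      (-cells..cells).flat_map do |dlon|
        (-cells..cells).map { |dlat| ((lon0 + dlon) * 0x1000) | (lat0 + dlat) }
      end
    end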

  • 2020-12-24 00:00

    I believe you can use MSSQL's spatial data types. If they work like other spatial data types I know, they store your points in a tree of nested rectangles, and you can read the lower-resolution rectangles off as implicit clusters.

  • 2020-12-24 00:07

    I'd probably use a modified* version of k-means clustering on the Cartesian (e.g. WGS-84 ECEF) coordinates of your points. It's easy to implement, converges quickly, and adapts to your data no matter what it looks like. Plus, you can pick k to suit your bandwidth requirements, since only the k cluster centroids (and perhaps their point counts) need to go over the wire.

    I'd make a table of cluster centroids, and add a field to the original data table indicating which cluster each point belongs to. You'd obviously want to update the clustering periodically if your data is at all dynamic. I don't know whether you could do that with a stored procedure and trigger, but perhaps.

    *The "modification" is to rescale each computed centroid vector so it sits on the surface of the earth. Otherwise you'd end up with a bunch of points at negative altitude when converted back to lat/lon/height (LLH).
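
    A minimal Ruby sketch of that modified k-means (my own illustration of the approach, using a unit sphere rather than true WGS-84 radii for brevity; all names are made up):

    DEG = Math::PI / 180.0

    # lat/lon in degrees -> a unit-sphere Cartesian vector.
    def to_xyz(lat, lon)
      [Math.cos(lat * DEG) * Math.cos(lon * DEG),
       Math.cos(lat * DEG) * Math.sin(lon * DEG),
       Math.sin(lat * DEG)]
    end

    def dist2(a, b)
      (0..2).sum { |i| (a[i] - b[i])**2 }
    end

    def kmeans_sphere(points, k, iterations = 20)
      centroids = points.sample(k)
      iterations.times do
        # Assignment step: each point joins its nearest centroid.
        groups = points.group_by { |p| centroids.min_by { |c| dist2(p, c) } }
        # Update step: recompute each centroid, then renormalize it back onto
        # the sphere -- the "modification" that avoids negative-altitude centroids.
        centroids = groups.values.map do |pts|
          mean = (0..2).map { |i| pts.sum { |p| p[i] } / pts.size.to_f }
          norm = Math.sqrt(mean.sum { |x| x * x })
          mean.map { |x| x / norm }
        end
      end
      centroids # convert back to lat/lon and store in your centroid table
    end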

  • 2020-12-24 00:07

    If you're clustering on geographic location, and I can't imagine it being anything else :-), you could store the "cluster ID" in the database along with the lat/long co-ordinates.

    What I mean by that is to divide the world map into (for example) a 100x100 matrix (10,000 clusters) and each co-ordinate gets assigned to one of those clusters.

    Then, you can detect very close coordinates by selecting those in the same square and moderately close ones by selecting those in adjacent squares.

    The size of your squares (and therefore the number of them) will be decided by how accurate you need the clustering to be. Obviously, if you only have a 2x2 matrix, you could get some clustering of co-ordinates that are a long way apart.

    You will always have the edge cases such as two points close together but in different clusters (one northernmost in one cluster, the other southernmost in another) but you could adjust the cluster size OR post-process the results on the client side.
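
    Here's a minimal sketch of that scheme (my own illustration; the 100x100 grid size is just the example above):

    GRID = 100 # a 100x100 matrix, i.e. 10,000 clusters

    # Map a coordinate to its cluster ID: the row-major index of its grid square.
    def cluster_id(lat, lon)
      row = (((lat + 90)  / 180.0) * GRID).floor.clamp(0, GRID - 1)
      col = (((lon + 180) / 360.0) * GRID).floor.clamp(0, GRID - 1)
      row * GRID + col
    end

    # "Moderately close" lookups take the square plus its eight neighbours.
    def neighbour_ids(lat, lon)
      row, col = cluster_id(lat, lon).divmod(GRID)
      [-1, 0, 1].product([-1, 0, 1]).map do |dr, dc|
        (row + dr).clamp(0, GRID - 1) * GRID + (col + dc).clamp(0, GRID - 1)
      end.uniq
    end

    A query along the lines of SELECT ... WHERE cluster_id IN (...) over those IDs then returns the same-square and adjacent-square points.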
