Is there an efficient way to group nearby locations based on longitude and latitude?

Submitted by 房东的猫 on 2019-12-24 03:45:10

Question


I'm trying to figure out a way to cluster multiple addresses based on proximity. I have latitude and longitude, which in this case is ideal, as some of the clusters would cross City/Zip boundaries. What I would have as a starting point is similar to this, but up to 10,000 rows within the table:

library(tibble)

Hospital.Addresses <- tibble(
  Hospital_Name = c("Massachusetts General Hospital", "MGH - Blake Building", "Shriners Hospitals for Children — Boston", "Yale-New Haven Medical Center", "Memorial Sloan Kettering", "MSKCC Urgent Care Center", "Memorial Sloan Kettering Blood Donation Room"),
  Address = c("55 Fruit St", "100 Blossom St", "51 Blossom St", "York St", "1275 York Ave", "425 E 67th St", "1250 1st Avenue Between 67th and 68th Streets"),
  City = c("Boston", "Boston", "Boston", "New Haven", "New York", "New York", "New York"),
  State = c("MA", "MA", "MA", "CT", "NY", "NY", "NY"),
  Zip = c("02114", "02114", "02114", "06504", "10065", "10065", "10065"),
  Latitude = c(42.363230, 42.364030, 42.363090, 41.304507, 40.764390, 40.764248, 40.764793),
  Longitude = c(-71.068680, -71.069430, -71.066630, -72.936781, -73.956810, -73.957127, -73.957818))

I would like to cluster the groups of addresses that are within ~1 mile of each other, ideally without calculating the Haversine distance between every pair of the 10,000 points. We could make the math easy and roughly estimate 1 mile as 0.016 degrees of either latitude or longitude.
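For what it's worth, here is a rough sketch of that grid idea (assuming the dplyr package; the 0.016-degree cell size is just the rough estimate above, and two points sitting on opposite sides of a cell boundary would land in different groups, which is part of why I'm asking):

library(dplyr)

cell <- 0.016  # rough estimate of 1 mile in degrees, per the approximation above

Hospital.Addresses %>%
  mutate(lat_cell  = floor(Latitude / cell),
         long_cell = floor(Longitude / cell)) %>%
  group_by(lat_cell, long_cell) %>%
  mutate(Group = cur_group_id()) %>%
  ungroup()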

An ideal output would be something that validates that the 3 hospital locations in Boston are in Group 1 (all within 1 mile of each other), the hospital in New Haven is on its own in Group 2 (not within 1 mile of anything else), and the 3 hospital locations in NY are all in Group 3 (all within 1 mile of each other).

Instead of group_by(), I'm more looking for group_near().

Any suggestions are greatly appreciated!


Answer 1:


Actually, the distm function from the geosphere package can handle 10,000 points in just a couple of minutes on my machine, which is not terribly bad compared to the time it took to write this solution. The distance matrix for 10,000 random points consumed less than a gigabyte of memory.

Performing hierarchical clustering with hclust on the distance matrix generated by geosphere clearly captures how near each point is to the others.

# create fake data: 10,000 random points across the continental US
lat  <- runif(10000, min = 28, max = 42)
long <- runif(10000, min = -109, max = -71)
df   <- data.frame(long, lat)   # geosphere expects longitude first

library(geosphere)

start <- Sys.time()
# pairwise Haversine distance matrix, converted from meters to miles
dmat <- distm(df) / 1000 * .62
print(Sys.time() - start)

# hierarchical clustering on the distance matrix
clusted <- hclust(as.dist(dmat))
# plot(clusted)
# cut the tree to get cluster ids for a 2-mile distance threshold
clustersIDs <- cutree(clusted, h = 2)
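As a quick sanity check against the example data in the question (a sketch, assuming the Hospital.Addresses tibble from above is loaded, and cutting at 1 mile instead of 2):

# distance matrix in miles for the 7 example hospitals (longitude first)
dmat.hosp <- distm(cbind(Hospital.Addresses$Longitude,
                         Hospital.Addresses$Latitude)) / 1000 * .62
# cut the dendrogram at 1 mile: the Boston rows share one group id,
# New Haven gets its own, and the New York rows share a third
Hospital.Addresses$Group <- cutree(hclust(as.dist(dmat.hosp)), h = 1)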


Source: https://stackoverflow.com/questions/58737822/is-there-an-efficient-way-to-group-nearby-locations-based-on-longitude-and-latit
