Try this k-means variation:
Initialization:
- choose k centers from the dataset at random, or even better using the k-means++ strategy
- for each point, compute the distance to its nearest cluster center, and build a heap keyed on this distance
- draw points from the heap, and assign each to its nearest cluster, unless that cluster is already overfull. If so, compute the next-nearest cluster center and reinsert the point into the heap
In the end, you should have a partitioning that satisfies your requirement of ±1 the same number of objects per cluster (make sure the last few clusters also have the right number: the first m clusters should have ceil(n/k) objects, the remainder exactly floor(n/k) objects, where m = n mod k).
Note that using a heap ensures the clusters remain convex: if they were no longer convex, there would have been a better swap candidate.
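A minimal sketch of this initialization in Python (names like `balanced_init` are illustrative, `dist` is any distance function, and plain random sampling stands in for k-means++):

```python
import heapq
import random

def balanced_init(points, k, dist):
    """Assign points to k clusters of (almost) equal size, nearest-first."""
    n = len(points)
    # the first n % k clusters get ceil(n/k) objects, the rest floor(n/k)
    capacity = [n // k + (1 if c < n % k else 0) for c in range(k)]
    centers = random.sample(points, k)  # plain random; k-means++ would be better

    # heap keyed on each point's distance to its nearest still-considered center
    heap = []
    for i, p in enumerate(points):
        prefs = sorted(range(k), key=lambda c: dist(p, centers[c]))
        heapq.heappush(heap, (dist(p, centers[prefs[0]]), i, 0, prefs))

    assignment = [None] * n
    while heap:
        d, i, rank, prefs = heapq.heappop(heap)
        c = prefs[rank]
        if capacity[c] > 0:
            assignment[i] = c           # nearest cluster still has room
            capacity[c] -= 1
        else:
            # overfull: reinsert the point keyed on its next-nearest center
            rank += 1
            heapq.heappush(heap, (dist(points[i], centers[prefs[rank]]), i, rank, prefs))
    return assignment, centers
```

Because the capacities sum to n, the loop always terminates with every point assigned and every cluster exactly at its target size.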
Iteration step:
Prerequisite: maintain for each cluster a list of "swap proposals" (objects that would prefer to be in a different cluster).
M step: compute the updated cluster centers, as in regular k-means
E step: iterate through all points (either just one at a time, or all in one batch)
Compute the nearest cluster center to the object (or all cluster centers that are closer than the current one). If the nearest is a different cluster:
- If the other cluster is smaller than the current cluster, just move the object to the new cluster
- If there is a swap proposal from the other cluster (or any cluster with a lower distance), swap the two objects' cluster assignments (if there is more than one offer, choose the one with the largest improvement)
- Otherwise, record a swap proposal for the other cluster
The cluster sizes remain invariant (± the ceil/floor difference), and objects are only moved from one cluster to another as long as this improves the objective. It should therefore converge at some point, like k-means. It might be a bit slower (i.e. need more iterations) though.
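One such iteration could look roughly like this in Python, for 1-D points (the function name, the simple swap-proposal bookkeeping, and the use of the cluster mean are my assumptions, not part of the description above):

```python
def constrained_iteration(points, assignment, k, dist):
    """One balanced k-means iteration with swap proposals (1-D sketch)."""
    # M step: recompute centers as in regular k-means (1-D mean for brevity)
    centers = []
    for c in range(k):
        members = [p for p, a in zip(points, assignment) if a == c]
        centers.append(sum(members) / len(members))

    sizes = [assignment.count(c) for c in range(k)]
    proposals = {c: [] for c in range(k)}  # indices of points wanting to leave c

    # E step: try to move each point to a strictly closer cluster
    for i, p in enumerate(points):
        cur = assignment[i]
        d_cur = dist(p, centers[cur])
        cand = min(range(k), key=lambda c: dist(p, centers[c]))
        if cand == cur or dist(p, centers[cand]) >= d_cur:
            continue
        if sizes[cand] < sizes[cur]:
            # smaller target cluster: move without breaking the balance
            assignment[i] = cand
            sizes[cur] -= 1
            sizes[cand] += 1
        else:
            # look for the best swap partner among proposals from cand
            best, best_gain = None, 0.0
            for j in proposals[cand]:
                gain = (d_cur - dist(p, centers[cand])) \
                     + (dist(points[j], centers[cand]) - dist(points[j], centers[cur]))
                if gain > best_gain:
                    best, best_gain = j, gain
            if best is not None:
                assignment[i], assignment[best] = cand, cur  # swap the pair
                proposals[cand].remove(best)
            else:
                proposals[cur].append(i)  # record that i would like to leave cur
    return assignment, centers
```

Each move or swap is only performed when it strictly decreases the total distance, which is what makes convergence plausible; the sizes only change when the target cluster is strictly smaller, so the ±1 balance is preserved.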
I do not know whether this has been published or implemented before. It's just what I would try (if I were to try k-means; there are much better clustering algorithms).