I have been trying to cluster multiple datasets of URLs (around 1 million each) to find the original and the typos of each URL. I decided to use the Levenshtein distance as a similarity metric.
Try ELKI instead of sklearn.
It is the only tool I know that allows index-accelerated DBSCAN with any metric.
It includes Levenshtein distance. You need to add an index to your database with -db.index; I always use the cover tree index (you need to choose the same distance for the index and for the algorithm, of course!).
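For reference, a rough sketch of what the command line can look like. Treat the class and parameter names as assumptions: they are from memory and differ between ELKI versions (check the --help output), and the input file, epsilon, and minpts values are placeholders:

java -jar elki.jar KDDCLIApplication \
  -dbc.in urls.txt -dbc.parser StringParser \
  -algorithm clustering.DBSCAN \
  -algorithm.distancefunction strings.LevenshteinDistanceFunction \
  -dbscan.epsilon 2.0 -dbscan.minpts 5 \
  -db.index "tree.metrical.covertree.SimplifiedCoverTree\$Factory" \
  -covertree.distancefunction strings.LevenshteinDistanceFunction

Note that the Levenshtein distance is given both to the cover tree index and to DBSCAN, matching the requirement above.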
You could use "pyfunc" distances and ball trees in sklearn, but performance was really bad because of the interpreter overhead. Also, DBSCAN in sklearn is much more memory-intensive.
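For comparison, here is a minimal sketch of that pyfunc/ball-tree approach (the sample data, radius, and helper names are made up for illustration). Every distance evaluation goes through the Python interpreter, which is exactly why it does not scale:

import numpy as np
from leven import levenshtein
from sklearn.neighbors import BallTree

urls = ["example.com", "exmaple.com", "examp1e.com", "other.org"]

def lev(a, b):
    # a and b are 1-element arrays holding row indices into urls
    return levenshtein(urls[int(a[0])], urls[int(b[0])])

X = np.arange(len(urls)).reshape(-1, 1)
tree = BallTree(X, metric=lev)         # callable metric -> interpreted Python
neighbors = tree.query_radius(X, r=2)  # neighbor indices within distance 2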
From the scikit-learn FAQ, you can do this by making a custom metric:
import numpy as np
from leven import levenshtein
from sklearn.cluster import dbscan

data = ["ACCTCCTAGAAG", "ACCTACTAGAAGTT", "GAATATTAGGCCGA"]

def lev_metric(x, y):
    i, j = int(x[0]), int(y[0])  # extract the indices stored in X
    return levenshtein(data[i], data[j])

# Cluster on row indices; the metric dereferences them into strings.
X = np.arange(len(data)).reshape(-1, 1)
dbscan(X, metric=lev_metric, eps=5, min_samples=2)
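Here dbscan is the function form of the estimator; it returns a tuple (core_samples, labels), where labels[i] is the cluster of data[i] and -1 marks noise. The trick is that X holds only row indices into data and the metric looks up the real strings, so you can cluster objects that scikit-learn cannot represent as fixed-length vectors. Every distance evaluation still happens in interpreted Python, though, so at the million-URL scale this will be slow.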