Question
I have a large dataset (I can't fit the entire data in memory). I want to fit a GMM on this dataset.
Can I use GMM.fit() (sklearn.mixture.GMM) repeatedly on mini-batches of the data?
Answer 1:
There is no reason to fit it repeatedly. Just randomly sample as many data points as you think your machine can compute in a reasonable time. If variation is not very high, the random sample will have approximately the same distribution as the full dataset.
import numpy as np

# If the data does not fit in memory, you can instead sample rows while reading it
sample_idx = np.random.choice(len(full_dataset), size=10000, replace=False)
randomly_sampled = full_dataset[sample_idx]
GMM.fit(randomly_sampled)
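For instance, if the data lives in a large CSV, a minimal sketch of sampling rows while streaming the file could look like the following (the file name big_data.csv is hypothetical, pandas is assumed for chunked reading, and GMM is assumed to be an already-created mixture model instance):

import numpy as np
import pandas as pd

sampled_chunks = []
# Read the file in manageable chunks and keep a random ~1% of the rows from each chunk
for chunk in pd.read_csv("big_data.csv", chunksize=100_000):
    keep = np.random.rand(len(chunk)) < 0.01
    sampled_chunks.append(chunk[keep])

randomly_sampled = pd.concat(sampled_chunks).to_numpy()
GMM.fit(randomly_sampled)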
And then use
GMM.predict(full_dataset)
# Again, you can predict one point or one batch at a time if the data does not fit in memory
to assign a component to every point.
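A minimal sketch of that batch-by-batch prediction, assuming the same hypothetical chunked CSV reading as above:

import numpy as np
import pandas as pd

labels = []
# Predict component assignments one chunk at a time so the full data never sits in memory
for chunk in pd.read_csv("big_data.csv", chunksize=100_000):
    labels.append(GMM.predict(chunk.to_numpy()))

all_labels = np.concatenate(labels)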
Answer 2:
fit will always forget previous data in scikit-learn. For incremental fitting, there is the partial_fit method. Unfortunately, GMM doesn't have a partial_fit (yet), so you can't do that.
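To illustrate the partial_fit pattern this answer refers to (GMM itself does not support it), here is a sketch using MiniBatchKMeans, which does implement partial_fit; it is only an example of the incremental API, not a substitute for a mixture model:

import numpy as np
from sklearn.cluster import MiniBatchKMeans

mbk = MiniBatchKMeans(n_clusters=5)
# partial_fit updates the model with each batch instead of refitting from scratch
for batch in np.array_split(np.random.rand(100_000, 3), 100):
    mbk.partial_fit(batch)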
Answer 3:
I think you can set init_params to the empty string '' when you create the GMM object, so that fit does not reinitialize the weights, means, and covariances on each call; then you might be able to train on the whole dataset by calling fit repeatedly on batches.
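A minimal sketch of that idea: the init_params trick belongs to the old sklearn.mixture.GMM API, and in current scikit-learn the closest equivalent I know of is GaussianMixture with warm_start=True, where each call to fit starts from the parameters found by the previous call (the data below is random and only for illustration):

import numpy as np
from sklearn.mixture import GaussianMixture

# warm_start=True plays the role of init_params='' in the old GMM class
gmm = GaussianMixture(n_components=5, warm_start=True, max_iter=10)
for batch in np.array_split(np.random.rand(100_000, 3), 100):
    gmm.fit(batch)

Note that this is only an approximation of online fitting: each call still runs EM on just that batch, merely starting from the previous parameters.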
Source: https://stackoverflow.com/questions/29095769/sklearn-gmm-on-large-datasets