Sklearn-GMM on large datasets

Submitted by 老子叫甜甜 on 2019-12-11 02:48:33

Question


I have a large dataset (I can't fit the entire data in memory). I want to fit a GMM on this dataset.

Can I call GMM.fit() (sklearn.mixture.GMM) repeatedly on mini-batches of the data?


Answer 1:


There is no reason to fit it repeatedly. Just randomly sample as many data points as you think your machine can process in a reasonable time. If the variation is not very high, the random sample will have approximately the same distribution as the full dataset.

import numpy as np

# np.random.choice samples from 1-D arrays only, so sample row indices instead
indices = np.random.choice(len(full_dataset), size=10000, replace=False)
randomly_sampled = full_dataset[indices]
# If the data does not fit in memory, sample rows as you read them instead

GMM.fit(randomly_sampled)

And then use

GMM.predict(full_dataset)
# Again, you can predict one point or one batch at a time if you cannot read it all into memory

on the rest of the data to classify it.
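If even the prediction pass does not fit in memory, the same idea can be applied batch by batch. A minimal sketch, assuming the data lives on disk as a .npy file (the file name and chunk size here are hypothetical) and GMM is the model fitted above:

import numpy as np

# Hypothetical on-disk array, opened with memory mapping so it is not loaded at once
full_dataset = np.load("full_dataset.npy", mmap_mode="r")

labels = []
chunk_size = 100_000
for start in range(0, len(full_dataset), chunk_size):
    chunk = np.asarray(full_dataset[start:start + chunk_size])
    labels.append(GMM.predict(chunk))
labels = np.concatenate(labels)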




Answer 2:


In scikit-learn, fit always forgets previous data. For incremental fitting there is the partial_fit method. Unfortunately, GMM doesn't have partial_fit (yet), so you can't do that.
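To show what the incremental-fitting API looks like, here is a minimal sketch using MiniBatchKMeans, which does implement partial_fit (GMM does not); the chunk generator is a stand-in for however you stream your data:

import numpy as np
from sklearn.cluster import MiniBatchKMeans

# Stand-in for streaming the data from disk in mini-batches
def iter_chunks(n_chunks=10, chunk_size=1000, n_features=5):
    rng = np.random.default_rng(0)
    for _ in range(n_chunks):
        yield rng.standard_normal((chunk_size, n_features))

# partial_fit updates the model with each batch instead of starting over
model = MiniBatchKMeans(n_clusters=3)
for chunk in iter_chunks():
    model.partial_fit(chunk)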




Answer 3:


I think you can set init_params to the empty string '' when you create the GMM object; then fit will not re-initialize the parameters on each call, so you might be able to train on the whole dataset batch by batch.
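In newer scikit-learn versions the GMM class has been replaced by sklearn.mixture.GaussianMixture, where the equivalent idea is warm_start=True: each fit call starts from the previous parameters instead of re-initializing. A minimal sketch of that pattern (the chunk loader is hypothetical, and repeated fitting on chunks only approximates fitting the full dataset):

import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical chunk loader standing in for reading the data from disk
def iter_chunks(n_chunks=10, chunk_size=1000, n_features=5):
    rng = np.random.default_rng(0)
    for _ in range(n_chunks):
        yield rng.standard_normal((chunk_size, n_features))

# warm_start=True reuses the previous fit's parameters as the initialization
# for the next call, instead of starting from scratch each time
gmm = GaussianMixture(n_components=3, warm_start=True, max_iter=10)
for chunk in iter_chunks():
    gmm.fit(chunk)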



Source: https://stackoverflow.com/questions/29095769/sklearn-gmm-on-large-datasets
