Sklearn-GMM on large datasets

Question:

I have a large dataset (I can't fit the entire data in memory). I want to fit a GMM on this dataset.

Can I use GMM.fit() (sklearn.mixture.GMM) repeatedly on mini-batches of data?

Asked By: abilng


Answers:

There is no reason to fit it repeatedly.
Just randomly sample as many data points as you think your machine can process in a reasonable time. If the variation in the data is not very high, the random sample will have approximately the same distribution as the full dataset.

import numpy as np

# np.random.choice samples from 1-D arrays only, so draw row indices instead
indices = np.random.choice(len(full_dataset), size=10000, replace=False)
randomly_sampled = full_dataset[indices]
# If the data does not fit in memory, you can sample rows while reading it instead

GMM.fit(randomly_sampled)

And then use

GMM.predict(full_dataset)
# Again, you can predict point by point or batch by batch if the data cannot be read into memory

on the rest to classify them.
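
For the batch-by-batch case, a minimal sketch, assuming a fitted GMM object and a hypothetical read_chunks helper that yields arrays of rows from disk:

import numpy as np

# read_chunks is a hypothetical helper that yields batches of rows from disk
labels = np.concatenate(
    [GMM.predict(chunk) for chunk in read_chunks("full_dataset.csv")]
)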

Answered By: Gioelelm

fit will always forget previous data in scikit-learn. For incremental fitting, there is the partial_fit function. Unfortunately, GMM doesn’t have a partial_fit (yet), so you can’t do that.
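
For reference, the partial_fit pattern looks like this on an estimator that does support it, e.g. MiniBatchKMeans (used here purely to illustrate the API, not as a GMM replacement; large_array and n_clusters are placeholder assumptions):

import numpy as np
from sklearn.cluster import MiniBatchKMeans

mbk = MiniBatchKMeans(n_clusters=5)  # n_clusters is a placeholder value
for batch in np.array_split(large_array, 100):  # large_array is a placeholder
    mbk.partial_fit(batch)  # each call updates the model with one mini-batch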

Answered By: Andreas Mueller

I think you can set init_params to the empty string '' when you create the GMM object; then repeated calls to fit will reuse the previously learned parameters instead of re-initializing them, so you might be able to train on the whole dataset batch by batch.
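
A minimal sketch of that idea, against the legacy sklearn.mixture.GMM API (removed in scikit-learn 0.20); batches and n_components are placeholder assumptions:

from sklearn.mixture import GMM  # legacy class, removed in scikit-learn 0.20

gmm = GMM(n_components=5, init_params='')  # '' = don't re-initialize on later fits
for batch in batches:  # batches: assumed iterable of 2-D arrays
    gmm.fit(batch)     # each fit resumes from the previously learned parameters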

Answered By: Huiyu

As Andreas Mueller mentioned, GMM doesn't have partial_fit yet, which would allow you to train the model in an iterative fashion. But you can make use of warm_start by setting its value to True when you create the GMM object. This allows you to iterate over batches of data and continue training the model from where you left off in the previous iteration.
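
A minimal sketch of that loop, using the modern GaussianMixture class (the successor to GMM); n_components, max_iter, and the in-memory batching are illustrative assumptions:

import numpy as np
from sklearn.mixture import GaussianMixture

# warm_start=True makes each fit() start from the previous fit's solution
gmm = GaussianMixture(n_components=5, warm_start=True, max_iter=10)
for batch in np.array_split(large_array, 100):  # large_array is a placeholder
    gmm.fit(batch)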