Question
I'm currently working on a speech recognition and machine learning related project. I have two classes, and I create a GMM classifier for each class, for the labels 'happy' and 'sad'.
I want to train GMM classifiers with MFCC vectors.
I am using two GMM classifiers, one for each label (previously it was one GMM per file).
But every time I run the script I get different results. What might be the cause of that with the same test and training samples?
In the outputs below, please note that I have 10 test samples and each line corresponds to the result of the ordered test samples.
Code:
from sklearn import mixture

classifiers = {'happy': [], 'sad': []}
probability = {'happy': 0, 'sad': 0}

def createGMMClassifiers():
    for name, data in training.iteritems():
        # For every class (in our case two, happy and sad) build one classifier.
        classifier = mixture.GMM(n_components=n_classes, n_iter=50)
        for mfcc in data:
            classifier.fit(mfcc)
        addClassifier(name, classifier)
    for testData in testing['happy']:
        classify(testData)

def addClassifier(name, classifier):
    classifiers[name] = classifier

def classify(testMFCC):
    for name, classifier in classifiers.iteritems():
        prediction = classifier.predict_proba(testMFCC)
        for f, s in prediction:
            probability[name] += f
    print 'happy ', probability['happy'], 'sad ', probability['sad']
Sample Output 1:
happy 154.300420496 sad 152.808941585
happy
happy 321.17737915 sad 318.621788517
happy
happy 465.294473363 sad 461.609246112
happy
happy 647.771003768 sad 640.451097035
happy
happy 792.420461416 sad 778.709674995
happy
happy 976.09526992 sad 961.337361541
happy
happy 1137.83592093 sad 1121.34722203
happy
happy 1297.14692405 sad 1278.51011583
happy
happy 1447.26926553 sad 1425.74595666
happy
happy 1593.00403707 sad 1569.85670672
happy
Sample Output 2:
happy 51.699579504 sad 152.808941585
sad
happy 81.8226208497 sad 318.621788517
sad
happy 134.705526637 sad 461.609246112
sad
happy 167.228996232 sad 640.451097035
sad
happy 219.579538584 sad 778.709674995
sad
happy 248.90473008 sad 961.337361541
sad
happy 301.164079068 sad 1121.34722203
sad
happy 334.853075952 sad 1278.51011583
sad
happy 378.730734469 sad 1425.74595666
sad
happy 443.995962929 sad 1569.85670672
sad
Answer 1:
But every time I run the script I am having different results. What might be the cause for that with same test and train samples?
scikit-learn uses a random initializer. If you want reproducible results, you can set the random_state argument:
random_state: RandomState or an int seed (None by default)
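For example, a minimal sketch of pinning the seed with the mixture.GMM API used in the question (the seed value 0 is just illustrative, and n_classes comes from the question's script):

from sklearn import mixture

# With a fixed random_state the initialization of EM is deterministic,
# so repeated runs on the same data produce the same fitted model.
classifier = mixture.GMM(n_components=n_classes, n_iter=50, random_state=0)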
for name, data in training.iteritems():
This is not correct, since each call to fit discards the previous one, so you end up training only on the last sample. You need to concatenate the features of each label into a single array before you run fit; you can use np.concatenate for that.
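A minimal sketch of that fix applied to the question's createGMMClassifiers, assuming each element of data is a 2-D MFCC array of shape (n_frames, n_coefficients) (the name label_mfcc is only illustrative):

import numpy as np
from sklearn import mixture

def createGMMClassifiers():
    for name, data in training.iteritems():
        # Stack all MFCC frames of this label into one array of shape
        # (total_frames, n_coefficients) before fitting.
        label_mfcc = np.concatenate(data, axis=0)
        classifier = mixture.GMM(n_components=n_classes, n_iter=50, random_state=0)
        # A single fit now sees every training file of the label,
        # instead of being overwritten by the last file.
        classifier.fit(label_mfcc)
        addClassifier(name, classifier)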
Source: https://stackoverflow.com/questions/38032454/having-different-results-every-run-with-gmm-classifier