Question
After running h2o.deeplearning for a binary classification problem, I run h2o.predict and obtain the following results:
  predict        No       Yes
1      No 0.9784425 0.0215575
2     Yes 0.4667428 0.5332572
3     Yes 0.3955087 0.6044913
4     Yes 0.7962034 0.2037966
5     Yes 0.7413591 0.2586409
6     Yes 0.6800801 0.3199199
I was hoping to get a confusion matrix with only two rows, but this output looks quite different. How do I interpret these results? Is there any way to get something like a confusion matrix with the actual and predicted values and an error percentage?
Answer 1:
You can either extract that information from the model fit (for example, if you pass a validation_frame), or you can use h2o.performance() to obtain an H2OBinomialModel performance object and extract the confusion matrix from it with h2o.confusionMatrix().
Example:
fit <- h2o.deeplearning(x, y, training_frame = train, validation_frame = valid, ...)
h2o.confusionMatrix(fit, valid = TRUE)  # confusion matrix computed on the validation frame
Or:
fit <- h2o.deeplearning(x, y, train, ...)
perf <- h2o.performance(fit, test)  # evaluate the trained model on a test frame
h2o.confusionMatrix(perf)           # confusion matrix from the performance object
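For completeness, here is a minimal end-to-end sketch that puts the pieces together. This is a sketch under stated assumptions: the file name "data.csv", the response column "label", and the split ratios are illustrative, not from the original question.

library(h2o)
h2o.init()

# Hypothetical input file with a binary response column named "label"
df <- h2o.importFile("data.csv")
df$label <- as.factor(df$label)  # the response must be a factor for classification

# 70/15/15 train/validation/test split
splits <- h2o.splitFrame(df, ratios = c(0.7, 0.15), seed = 1)
train <- splits[[1]]
valid <- splits[[2]]
test  <- splits[[3]]

y <- "label"
x <- setdiff(names(df), y)

fit <- h2o.deeplearning(x = x, y = y,
                        training_frame = train,
                        validation_frame = valid)

# Confusion matrix from the validation metrics stored in the model
h2o.confusionMatrix(fit, valid = TRUE)

# Confusion matrix on held-out data via a performance object
perf <- h2o.performance(fit, newdata = test)
h2o.confusionMatrix(perf)

The printed matrix also reports per-class error counts and rates, which covers the error-percentage part of the question. As an aside, the predict column is assigned by comparing the class probability to the model's maximum-F1 threshold rather than a fixed 0.5, which is how a row can be labeled Yes even when its Yes probability is below 0.5 (as in rows 4-6 of the output above).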
Source: https://stackoverflow.com/questions/41075416/how-to-interpret-results-of-h2o-predict