How can I analyze a confusion matrix?

Submitted by 大憨熊 on 2019-12-04 03:55:24

Question


When I print out scikit-learn's confusion matrix, I get a very large matrix. I want to analyze what the true positives, true negatives, etc. are. How can I do so? This is what my confusion matrix looks like; I wish to understand it better.

[[4015  336    0 ...,    0    0    2]
 [ 228 2704    0 ...,    0    0    0]
 [   4    7   19 ...,    0    0    0]
 ..., 
 [   3    2    0 ...,    5    0    0]
 [   1    1    0 ...,    0    0    0]
 [  13    1    0 ...,    0    0   11]]

Answer 1:


IIUC, your question is ill-defined. "True positives", "false negatives" - these are terms that are defined only for binary classification. Read more about the definition of a confusion matrix.

In the multiclass case, the confusion matrix is N x N. Each diagonal entry (i, i) counts the cases where the prediction is i and the outcome is i too. Any off-diagonal entry (i, j) counts a mistake where the outcome was i but the prediction was j. There is no meaning to "positive" and "negative" in this case.

You can find the diagonal elements easily using np.diagonal and, following that, it is easy to sum them. The number of wrong cases is the sum of the whole matrix minus the sum of the diagonal.
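The counting described above can be sketched as follows, using a small made-up 3x3 matrix (rows are true classes, columns are predictions, following scikit-learn's convention):

```python
import numpy as np

# A small multiclass confusion matrix (rows = true class, cols = predicted)
conf = np.array([[4, 1, 0],
                 [2, 3, 1],
                 [0, 1, 5]])

correct = np.diagonal(conf).sum()  # predictions that match the true class
total = conf.sum()                 # all observations
wrong = total - correct            # everything off the diagonal

accuracy = correct / total
print(correct, wrong, accuracy)
```

Here `correct` is 12, `wrong` is 5, and `accuracy` is 12/17, matching the rule "diagonal sum over total sum".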




Answer 2:


Terms like true positive, false positive, etc. refer to binary classification, whereas the dimensionality of your confusion matrix is greater than two. So you can only talk about the number of observations known to be in group i but predicted to be in group j (the definition of a confusion matrix).




Answer 3:


Approach 1: Binary Classification

from sklearn.metrics import confusion_matrix
import pandas as pd

y_test = [1, 0, 0]
y_pred = [1, 0, 0]
matrix = confusion_matrix(y_test, y_pred)

# Rows are the actual classes, columns are the predicted classes
rows = ["Actual 0", "Actual 1"]
cols = ["Predicted 0", "Predicted 1"]
pd.DataFrame(matrix, rows, cols)
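In the binary case, the four counts the question asks about can be read straight off this 2x2 matrix. A minimal sketch (with a slightly larger made-up sample so each cell is populated):

```python
from sklearn.metrics import confusion_matrix

y_test = [1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0]

# ravel() flattens the 2x2 matrix row by row: [[tn, fp], [fn, tp]]
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print(tn, fp, fn, tp)
```

This prints `2 1 0 2`: two true negatives, one false positive, no false negatives, and two true positives.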

Approach 2: Multiclass Classification

While sklearn.metrics.confusion_matrix provides a numeric matrix, you can generate a 'report' using the following:

import pandas as pd
y_true = pd.Series([2, 0, 2, 2, 0, 1, 1, 2, 2, 0, 1, 2])
y_pred = pd.Series([0, 0, 2, 1, 0, 2, 1, 0, 2, 0, 2, 2])

pd.crosstab(y_true, y_pred, rownames=['True'], colnames=['Predicted'], margins=True)

which results in:

Predicted  0  1  2  All
True                   
0          3  0  0    3
1          0  1  2    3
2          2  1  3    6
All        5  2  5   12

This allows us to see that:

  1. The diagonal elements show the number of correct classifications for each class: 3, 1 and 3 for the classes 0, 1 and 2.
  2. The off-diagonal elements provide the misclassifications: for example, 2 of class 2 were misclassified as 0, none of class 0 were misclassified as 2, etc.
  3. The "All" subtotals give the total number of observations for each class in both y_true and y_pred.

This method also works for text labels, and for large datasets it can be extended to provide percentage reports.



Source: https://stackoverflow.com/questions/35062665/how-can-i-analyze-a-confusion-matrix
