I am trying to compute the area under the ROC curve with sklearn.metrics.roc_auc_score, using the following method:
    roc_auc = sklearn.metrics.roc_auc_score(y_true, y_pred)
This is because you are passing in the decisions of your classifier instead of the scores it calculated. There was a question on this on SO recently, and a related pull request to scikit-learn.
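For example, here is a minimal sketch contrasting the two ways of calling roc_auc_score (the toy dataset and the LogisticRegression classifier are just assumptions for illustration, not taken from your question):

```python
import sklearn.metrics
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical toy binary classification data.
X, y = make_classification(n_samples=1000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)

# Passing hard 0/1 decisions: the threshold is already baked in,
# so the "AUC" collapses to a single operating point.
auc_decisions = sklearn.metrics.roc_auc_score(y_test, clf.predict(X_test))

# Passing continuous scores (probability of the positive class):
# this is what roc_auc_score expects.
auc_scores = sklearn.metrics.roc_auc_score(
    y_test, clf.predict_proba(X_test)[:, 1])
```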
The point of a ROC curve (and the area under it) is to study the tradeoff between the true positive rate and the false positive rate as the classification threshold is varied. By default in a binary classification task, if your classifier's score is > 0.5, then class 1 is predicted; otherwise class 0 is predicted. As you change that threshold, you trace out the ROC curve. The higher up the curve is (the more area under it), the better the classifier. However, to get this curve you need access to the scores of a classifier, not its decisions. Otherwise, whatever the decision threshold is, the decisions stay the same, and the AUC effectively degenerates to an accuracy measure.
Which classifier are you using?