I run a Python program that calls methods from sklearn.metrics to calculate precision and F1 score. Here is the output when there is no predicted sample:
https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/classification.py
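For reference, a minimal sketch that reproduces the situation (the label arrays below are made up for illustration):

```python
from sklearn.metrics import precision_score, recall_score, f1_score

# Made-up labels for illustration: the positive class (1) exists in y_true,
# but the classifier never predicts it.
y_true = [1, 1, 0, 0]
y_pred = [0, 0, 0, 0]

# With no predicted positives, TP + FP == 0, so precision (and therefore F1)
# is ill-defined; scikit-learn sets it to 0.0 and emits an UndefinedMetricWarning.
print(precision_score(y_true, y_pred))  # 0.0
print(recall_score(y_true, y_pred))     # 0.0 (TP = 0)
print(f1_score(y_true, y_pred))         # 0.0
```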
F1 = 2 * (precision * recall) / (precision + recall)
precision = TP / (TP + FP). As you've just said, if the predictor doesn't predict the positive class at all, precision is 0.
recall = TP / (TP + FN). If the predictor doesn't predict the positive class, TP is 0, so recall is 0.
With both precision and recall at 0, the F1 formula becomes 2 * 0 / (0 + 0), so now you are dividing 0 by 0.
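A simplified sketch of the guard that handles this case (my own illustration, not the library's actual code): when precision + recall is 0, the harmonic mean is undefined, so the score falls back to 0.

```python
def f1_score_safe(precision, recall):
    """Simplified illustration of handling the 0/0 case described above."""
    # If the predictor never predicts the positive class, both precision
    # and recall are 0, and 2 * 0 * 0 / (0 + 0) would divide by zero.
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```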
Precision, Recall, F1-score and Accuracy calculation
- In a given image of Dogs and Cats
* Total Dogs - 12 D = 12
* Total Cats - 8 C = 8
- Computer program predicts
* Dogs - 8
5 are actually Dogs T.P = 5
3 are not F.P = 3
* Cats - 12
5 are actually Cats T.N = 5
7 are not F.N = 7
(only 5 of the 8 Cats can be predicted correctly, since 3 were already misclassified as Dogs)
- Calculation
* Precision = T.P / (T.P + F.P) => 5 / (5 + 3) = 0.625
* Recall = T.P / (T.P + F.N) = T.P / D => 5 / 12 ≈ 0.417
* F1 = 2 * (Precision * Recall) / (Precision + Recall)
* F1 = 0.5
* Accuracy = (T.P + T.N) / (P + N) => (5 + 5) / (12 + 8)
* Accuracy = 0.5
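A quick Python check of the numbers above (treating Dogs as the positive class):

```python
# Dogs are the positive class: TP = 5, FP = 3, TN = 5, FN = 7 (20 animals total).
TP, FP, TN, FN = 5, 3, 5, 7

precision = TP / (TP + FP)                                 # 5 / 8  = 0.625
recall    = TP / (TP + FN)                                 # 5 / 12 ≈ 0.417
f1        = 2 * precision * recall / (precision + recall)  # 0.5
accuracy  = (TP + TN) / (TP + FP + TN + FN)                # 10 / 20 = 0.5

print(precision, recall, f1, accuracy)
```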
Wikipedia reference