Question
Does anyone know how to properly quantify the success of an image detection algorithm? How do you combine the two sources of error, given that one source is the number of objects the algorithm failed to detect, and the other is the number of false positives it misidentified as the object?
So if, for example, there were 574 objects in the image but the algorithm only detected 540 of them while producing 113 false positives, how do I get the percent accuracy?
Answer 1:
You can calculate what is known as the F1 Score (sometimes just F Score) by first calculating the precision and recall performance of your algorithm.
The precision is the number of true positives divided by the number of predicted positives, where predicted positives = (true positives + false positives).
The recall is the number of true positives divided by the number of actual positives, where actual positives = (true positives + false negatives).
In other words, precision means, "Of all objects where we detected a match, what fraction actually does match?" And recall means "Of all objects that actually match, what fraction did we correctly detect as matching?".
Having calculated precision, P, and recall, R, the F1 Score is 2 * (P * R) / (P + R), giving you a single metric between 0 and 1 with which to compare the performance of different algorithms.
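Applied to the numbers in the question, a minimal Python sketch (assuming all 540 detections are genuine matches, i.e. true positives):

```python
# Counts from the question: 574 actual objects, 540 detected, 113 false positives.
true_positives = 540
false_positives = 113
false_negatives = 574 - 540  # the 34 objects the algorithm missed

precision = true_positives / (true_positives + false_positives)  # ~0.827
recall = true_positives / (true_positives + false_negatives)     # ~0.941
f1_score = 2 * (precision * recall) / (precision + recall)       # ~0.880

print(f"precision={precision:.3f} recall={recall:.3f} F1={f1_score:.3f}")
```

Note that the F1 Score penalizes both kinds of error at once: missing more objects lowers recall, while producing more false positives lowers precision, and either drags the combined score down.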
The F1 Score is a statistical measure used, among other applications, in machine learning. You can read more about it in this Wikipedia entry.
Answer 2:
Here are some measures/metrics that you can use to evaluate your model for image segmentation (or object detection):
- F1 Score
- Dice
- Shape similarity
All three are described on this page of a segmentation challenge.
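As a rough illustration of the Dice coefficient on binary segmentation masks, here is a minimal NumPy sketch (not taken from the linked challenge page; the function name and toy masks are hypothetical):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy example: predicted mask covers 4 pixels, ground truth covers 6,
# and they overlap on 4 pixels.
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[1, 1, 1, 0],
                  [1, 1, 1, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(dice_coefficient(pred, truth))  # 2*4 / (4 + 6) = 0.8
```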
Source: https://stackoverflow.com/questions/20452101/how-to-measure-the-success-and-percent-accuracy-of-an-image-detection-algorithm