Does anyone know how to properly quantify the success of an image detection algorithm? How do you combine the two sources of error, since one source is the number of objects t
You can calculate what is known as the F1 Score (sometimes just the F Score) by first calculating the precision and recall of your algorithm.
The precision is the number of true positives divided by the number of predicted positives, where predicted positives = (true positives + false positives).
The recall is the number of true positives divided by the number of actual positives, where actual positives = (true positives + false negatives).
In other words, precision means, "Of all objects where we detected a match, what fraction actually does match?" And recall means "Of all objects that actually match, what fraction did we correctly detect as matching?".
Having calculated precision, P, and recall, R, the F1 Score is 2 * (P * R) / (P + R). It gives you a single metric between 0 and 1 with which to compare the performance of different algorithms.
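For concreteness, here is a minimal sketch in Python of how those three values are computed. The counts below are made-up placeholders; in practice you would tally them by matching your detections against the ground truth.

```python
# Hypothetical counts from evaluating a detector against ground truth
true_pos = 80    # detections that correspond to a real object
false_pos = 10   # detections that do not correspond to any real object
false_neg = 20   # real objects the detector missed

# Precision: of everything we detected, what fraction was correct?
precision = true_pos / (true_pos + false_pos)

# Recall: of everything we should have detected, what fraction did we find?
recall = true_pos / (true_pos + false_neg)

# F1: harmonic mean of precision and recall (guard against division by zero)
f1 = 2 * precision * recall / (precision + recall) if (precision + recall) > 0 else 0.0

print(f"precision={precision:.3f} recall={recall:.3f} F1={f1:.3f}")
```

If you are already using scikit-learn, its precision_score, recall_score and f1_score functions (or precision_recall_fscore_support) compute the same quantities directly from arrays of ground-truth and predicted labels.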
The F1 Score is a statistical measure used, among other applications, in machine learning. You can read more about it in this Wikipedia entry.
Here are some measures/metrics that you can use to evaluate your model for image segmentation (or object detection):
All three are described on this page of a segmentation challenge.