How to measure the success and percent accuracy of an image detection algorithm?

Asked by 伪装坚强ぢ on 2021-01-20 12:05

Does anyone know how to properly quantify the success of an image detection algorithm? How do you combine the two sources of error, since one source is the number of objects that are detected but incorrect (false positives) and the other is the number of objects that are missed (false negatives)?

2 Answers
  • 2021-01-20 12:32

    You can calculate what is known as the F1 Score (sometimes just F Score) by first calculating the precision and recall performance of your algorithm.

    The precision is the number of true positives divided by the number of predicted positives, where predicted positives = (true positives + false positives).

    The recall is the number of true positives divided by the number of actual positives, where actual positives = (true positives + false negatives).

    In other words, precision means, "Of all objects where we detected a match, what fraction actually does match?" And recall means "Of all objects that actually match, what fraction did we correctly detect as matching?".
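
    As a minimal sketch, the two quantities can be computed directly from the raw counts. The counts below are hypothetical, and how you match detections to ground truth in the first place (for example, by an overlap threshold) is up to your evaluation protocol:

    ```python
    def precision(true_pos: int, false_pos: int) -> float:
        """Fraction of predicted positives that are correct."""
        return true_pos / (true_pos + false_pos)

    def recall(true_pos: int, false_neg: int) -> float:
        """Fraction of actual positives that were detected."""
        return true_pos / (true_pos + false_neg)
    ```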

    Having calculated precision, P, and recall, R, the F1 Score is 2 * (P * R) / (P + R). It gives you a single metric, between 0 and 1, with which to compare the performance of different algorithms.
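
    For example, assuming the `precision` and `recall` functions from the sketch above, and hypothetical counts for a detector (80 correct detections, 20 spurious detections, 40 missed objects):

    ```python
    def f1_score(p: float, r: float) -> float:
        """Harmonic mean of precision and recall."""
        return 2 * p * r / (p + r)

    p = precision(true_pos=80, false_pos=20)  # 0.80
    r = recall(true_pos=80, false_neg=40)     # ~0.67
    print(f1_score(p, r))                     # ~0.73
    ```

    Because it is a harmonic mean, the F1 Score is dragged down by whichever of precision or recall is worse, so an algorithm cannot score well by trading one entirely for the other.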

    The F1 Score is a statistical measure used, among other applications, in machine learning. You can read more about it in the Wikipedia entry on the F-score.

  • 2021-01-20 12:44

    Here are some measures/metrics that you can use to evaluate your model for image segmentation (or object detection):

    • F1 Score
    • Dice
    • Shape similarity

    All three are described on this page of a segmentation challenge. A sketch of the Dice computation is given below.
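
    As an illustration, the Dice coefficient for two binary masks A and B is 2|A ∩ B| / (|A| + |B|). A minimal NumPy sketch, with hypothetical placeholder masks and an assumed convention that two empty masks count as a perfect match:

    ```python
    import numpy as np

    def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
        """Dice = 2|A ∩ B| / (|A| + |B|) for masks of the same shape."""
        a = mask_a.astype(bool)
        b = mask_b.astype(bool)
        denom = a.sum() + b.sum()
        if denom == 0:
            return 1.0  # assumed convention: two empty masks match perfectly
        return 2.0 * np.logical_and(a, b).sum() / denom

    # Hypothetical example: predicted vs. ground-truth 3x3 masks.
    pred  = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
    truth = np.array([[1, 1, 0], [0, 0, 0], [0, 0, 0]])
    print(dice_coefficient(pred, truth))  # 2*2 / (3 + 2) = 0.8
    ```

    Note that for binary masks the Dice coefficient is equivalent to the F1 Score computed over pixels, which is why the two often appear together in segmentation benchmarks.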
