I am dealing with an image classification problem. Before classification, the images need to be segmented. I have tried several methods. My question is: how can I test the accuracy of segmentation?
I think multiple measures should be used when evaluating a segmentation result. Accuracy alone (the ratio of the correctly segmented area to the ground-truth area) is not enough, because your segmentation may also cover area that is not in the ground truth. So I suggest complementing it with measures that penalize such false positives, for example precision/recall or the Jaccard index (intersection over union).
Measuring the quality of image segmentation is a well-studied topic in the computer vision community.
You can look at this method, which is suitable for binary segmentations. There is also this method, which handles multiple segments as well as boundary accuracy.
You can use jaccard_similarity_score as shown here: http://scikit-learn.org/stable/modules/generated/sklearn.metrics.jaccard_similarity_score.html. For images, you first need to flatten each mask into a 1-D array. (Note that in recent scikit-learn versions this function has been replaced by jaccard_score.)
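As a minimal sketch of the flatten-then-score step, assuming the masks are binary NumPy arrays (the helper name `jaccard_index` is mine; the manual computation below matches what the sklearn metric reports for binary masks):

```python
import numpy as np

def jaccard_index(pred, gt):
    """Intersection-over-union between two binary masks."""
    pred = pred.astype(bool).ravel()  # flatten 2-D mask to 1-D
    gt = gt.astype(bool).ravel()
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return intersection / union if union else 1.0  # two empty masks agree

# toy example: 2 pixels overlap, 4 pixels in the union
pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
gt = np.array([[1, 1, 0],
               [0, 0, 1]])
print(jaccard_index(pred, gt))  # 2 / 4 = 0.5
```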
A usual approach is to take the ratio of the detected area that falls inside the correct (ground-truth) position to the total ground-truth area.
If your areas are not uniform, this becomes (pixels in the detected area that match the ground truth) / (total number of pixels in the ground-truth segmentation).
In the image below: count(gray) / count(black + gray).
You should also consider the ratio of the detected area to the ground-truth area, because a detection that covers the whole image would still score 100% on the formula above.
And how happy would you be if the ground-truth object were detected as 1000 little segments that together perfectly cover its area?
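The whole-image-detection pitfall can be made concrete with a recall/precision pair. A minimal sketch, assuming binary NumPy masks (the helper name `precision_recall` is mine):

```python
import numpy as np

def precision_recall(pred, gt):
    """Precision and recall for two binary masks."""
    pred = pred.astype(bool).ravel()
    gt = gt.astype(bool).ravel()
    tp = np.logical_and(pred, gt).sum()  # correctly detected pixels
    precision = tp / pred.sum() if pred.sum() else 0.0
    recall = tp / gt.sum() if gt.sum() else 0.0
    return precision, recall

# a "detection" covering the whole image: perfect recall, poor precision
pred = np.ones((4, 4), dtype=int)
gt = np.zeros((4, 4), dtype=int)
gt[1:3, 1:3] = 1  # ground truth is a 2x2 square (4 of 16 pixels)
p, r = precision_recall(pred, gt)
print(p, r)  # 0.25 1.0
```

Recall alone (the formula above) gives this degenerate detection a perfect score; precision exposes it.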