What's the correct way to compute a confusion matrix for object detection?
Question

I am trying to compute a confusion matrix for my object detection model, but I seem to be stumbling over a few pitfalls. My current approach is to compare each predicted box with each ground-truth box. If they have an IoU above some threshold, I insert the prediction into the confusion matrix. After the insertion I delete that element from the predictions list and move on to the next element. Because I also want misclassified proposals to be inserted into the confusion matrix, I treat the elements