I've updated to TensorFlow 1.9 and the latest master of the Object Detection API. When running a training/evaluation session that worked fine previously (on, I think, version 1.6), training appears to proceed as expected, but I only get evaluation images and metrics for a single image (the first one).
In TensorBoard the image is labeled 'Detections_Left_Groundtruth_Right'. The evaluation step itself also completes extremely quickly, which leads me to believe this isn't just a TensorBoard display issue.
Looking in model_lib.py, I see some suspicious code (near line 349):
    eval_images = (
        features[fields.InputDataFields.original_image] if use_original_images
        else features[fields.InputDataFields.image])
    eval_dict = eval_util.result_dict_for_single_example(
        eval_images[0:1],
        features[inputs.HASH_KEY][0],
        detections,
        groundtruth,
        class_agnostic=class_agnostic,
        scale_to_absolute=True)
This reads to me as though the evaluator always runs a single evaluation on the first image of the batch. Has anyone seen and/or fixed this? I will update if changing the above works.
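For illustration, the slicing in eval_images[0:1] keeps only the first element along the batch dimension, which is why I read this as single-image evaluation. A minimal sketch (the shapes are hypothetical):

    import numpy as np

    # A hypothetical batch of 8 RGB images: (batch, height, width, channels).
    batch = np.zeros((8, 300, 300, 3))

    # Slicing with [0:1] keeps the batch dimension but drops everything
    # after the first image.
    print(batch[0:1].shape)  # (1, 300, 300, 3)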
You are right: the Object Detection API only supports a batch size of 1 for evaluation. The number of evaluations is equal to the number of eval steps, and the eval metrics are accumulated across those batches.
By the way, a change that lets you view more eval images in TensorBoard was just submitted to master.
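As a rough sketch of where these knobs live: in the pipeline config, the eval_config message has a num_examples field (the number of batch-of-1 eval steps over which metrics accumulate) and, if I remember eval.proto correctly, a num_visualizations field controlling how many eval images are exported to TensorBoard. Something like:

    eval_config {
      # Number of examples to evaluate; with batch size 1 this is also
      # the number of eval steps over which metrics are accumulated.
      num_examples: 1000

      # How many side-by-side detection/groundtruth images to export to
      # TensorBoard (field name assumed from eval.proto).
      num_visualizations: 20
    }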
I have the same issue when using the model_main.py module. However, when using the train.py and eval.py scripts found in the object_detection/legacy/ directory (invoked roughly as sketched below), I can see more than one image in TensorBoard.
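For reference, the legacy scripts run as two separate processes; a rough sketch of the invocations, with all paths as placeholders:

    # Legacy training loop.
    python object_detection/legacy/train.py \
        --pipeline_config_path=path/to/pipeline.config \
        --train_dir=path/to/train_dir

    # Legacy evaluation loop, run alongside (or after) training.
    python object_detection/legacy/eval.py \
        --pipeline_config_path=path/to/pipeline.config \
        --checkpoint_dir=path/to/train_dir \
        --eval_dir=path/to/eval_dir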
I haven't had enough time yet to go through the code and fully understand what is going on. I think this eval path does not call the code you are quoting, because the images in TensorBoard look different: rather than left/right image pairs showing prediction/ground truth, only the predicted bounding boxes are shown.
Source: https://stackoverflow.com/questions/51636600/tensorflow-1-9-object-detection-model-main-py-only-evaluates-one-image