I'm trying to work with the recently released TensorFlow Object Detection API, and was wondering how I could evaluate one of the pretrained models they provide in their model zoo? ex.
You can evaluate the pretrained models by running the eval.py script. It will ask you to point to a config file (which will be in the samples/configs directory) and a checkpoint; for the checkpoint you provide a path of the form .../.../model.ckpt, dropping any extensions such as .meta or .data-00000-of-00001.
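For example, an invocation might look something like this (the flag names come from object_detection/eval.py; the config file name and all paths below are placeholders you'd swap for your own):

```sh
# Run the evaluation job against a model downloaded from the model zoo.
# --checkpoint_dir points at the directory holding the model.ckpt.* files;
# --eval_dir is where evaluation summaries (viewable in TensorBoard) are written.
python object_detection/eval.py \
    --logtostderr \
    --pipeline_config_path=object_detection/samples/configs/faster_rcnn_resnet101_coco.config \
    --checkpoint_dir=path/to/downloaded_model_dir \
    --eval_dir=path/to/eval_output
```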
You also have to create a file named "checkpoint" inside the directory that contains the checkpoint you'd like to evaluate, and then write the following two lines inside that file:
model_checkpoint_path: "path/to/model.ckpt"
all_model_checkpoint_paths: "path/to/model.ckpt"
(where you modify path/to/ appropriately)
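For example, from a shell you could generate that file like this (again, path/to is a placeholder for the directory the checkpoint actually lives in):

```sh
# Write the "checkpoint" index file that TensorFlow's checkpoint
# discovery (used by eval.py) reads to locate model.ckpt.
cat > path/to/checkpoint <<EOF
model_checkpoint_path: "path/to/model.ckpt"
all_model_checkpoint_paths: "path/to/model.ckpt"
EOF
```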
The number that you get at the end is the mean Average Precision (mAP) using a 50% IoU cutoff threshold for true positives. This is slightly different from the metric reported in the model zoo, which uses the COCO mAP metric and averages over multiple IoU thresholds.
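If your checkout of the API is recent enough to support COCO evaluation, you may be able to get closer to the model zoo numbers by changing the metrics in the eval_config section of the pipeline config. To my knowledge the relevant field is metrics_set, but check the eval.proto in your version to confirm; the num_examples value below is just a placeholder:

```
eval_config: {
  # Use COCO mAP (averaged over IoU thresholds from 0.5 to 0.95)
  # instead of the default PASCAL VOC metric at IoU 0.5.
  metrics_set: "coco_detection_metrics"
  num_examples: 8000
}
```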