object-detection-api

Tensorflow 1.9 / Object Detection: model_main.py only evaluates one image

旧城冷巷雨未停, submitted on 2019-12-03 03:19:04
I've updated to Tensorflow 1.9 and the latest master of the Object Detection API. When running a training/evaluation session that worked fine previously (I think on version 1.6), training appears to proceed as expected, but I only get evaluation and metrics for one image (the first). In TensorBoard the image is labeled 'Detections_Left_Groundtruth_Right'. The evaluation step itself also completes extremely quickly, which leads me to believe this isn't just a TensorBoard issue. Looking in model_lib.py, I see some suspicious code (near line 349): eval_images = ( features[fields.InputDataFields
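A commonly reported workaround for this regression is to set the number of eval visualizations explicitly in the pipeline config rather than relying on the default. A hedged sketch of the relevant eval_config block (the field values below are illustrative, not taken from the question):

```proto
eval_config {
  # Illustrative values: set num_examples to the size of your eval set.
  num_examples: 1000
  # Controls how many Detections_Left_Groundtruth_Right side-by-side
  # images are written to TensorBoard during evaluation.
  num_visualizations: 20
}
```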

Get the bounding box coordinates in the TensorFlow object detection API tutorial

旧时模样, submitted on 2019-12-02 23:28:46
I am new to both Python and Tensorflow. I am trying to run the object_detection_tutorial file from the Tensorflow Object Detection API, but I cannot find where to get the coordinates of the bounding boxes when objects are detected. Relevant code: # The following processing is only for single image detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0]) detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0]) ... The place where I assume the bounding boxes are drawn is: # Visualization of the results of a detection. vis_util.visualize_boxes_and_labels_on_image_array
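After the session runs, output_dict['detection_boxes'] holds normalized [ymin, xmin, ymax, xmax] rows alongside output_dict['detection_scores']. A minimal sketch of converting those rows to pixel coordinates (the helper name and the 0.5 threshold are illustrative, not part of the tutorial):

```python
def boxes_to_pixels(boxes, scores, im_width, im_height, min_score=0.5):
    """Convert normalized [ymin, xmin, ymax, xmax] boxes to pixel corners,
    keeping only detections whose score passes the threshold."""
    pixel_boxes = []
    for (ymin, xmin, ymax, xmax), score in zip(boxes, scores):
        if score < min_score:
            continue
        # Normalized coordinates are relative to image width/height.
        pixel_boxes.append((int(xmin * im_width), int(ymin * im_height),
                            int(xmax * im_width), int(ymax * im_height)))
    return pixel_boxes
```

Each returned tuple is (xmin, ymin, xmax, ymax) in pixels, ready to crop with or draw onto the image.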

Output score , class and id Extraction using TensorFlow object detection

删除回忆录丶, submitted on 2019-12-02 09:55:11
How can I extract the output scores, object classes, and object IDs for objects detected in images by the Tensorflow object detection model? I want to store all these details in individual variables so they can later be saved to a database. I'm using the same code as in this link: https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb Please help me out with a solution to this problem. I've tried print(str(output_dict['detection_classes'][0]), ":", str(output_dict['detection_scores'][0])) This works and gives the
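Assuming the tutorial's output_dict and category_index are available, one way to pull each confident detection into plain Python values suitable for a database row looks like this (a sketch; the helper name and the 0.5 threshold are my own):

```python
def extract_detections(output_dict, category_index, min_score=0.5):
    """Collect class id, class name, and score for each confident detection."""
    rows = []
    for cls, score in zip(output_dict['detection_classes'],
                          output_dict['detection_scores']):
        if score < min_score:
            continue
        rows.append({
            'class_id': int(cls),                                   # numeric label id
            'class_name': category_index.get(cls, {}).get('name', 'unknown'),
            'score': float(score),                                  # detection confidence
        })
    return rows
```

Each dict in the result maps directly onto columns of an INSERT statement.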

Tensorflow's .pb and .pbtxt files don't work with OpenCV after retraining MobileNet SSD V1 COCO

柔情痞子, submitted on 2019-12-02 07:13:48
I have followed this tutorial to retrain MobileNet SSD V1 using Tensorflow GPU as described, reached a loss of 0.5 after training (more info about the config below), and got model.ckpt. This is the command I used for training: python ../models/research/object_detection/legacy/train.py --logtostderr --train_dir=./data/ --pipeline_config_path=./ssd_mobilenet_v1_pets.config And this is the command for freezing (generating the .pb file): python ../models/research/object_detection/export_inference_graph.py --input_type image_tensor --pipeline_config_path ./ssd_mobilenet_v1_pets.config --trained_checkpoint
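A frequent cause of this failure is feeding OpenCV the training-time .pbtxt rather than one built for inference: OpenCV's DNN module provides a tf_text_graph_ssd.py helper (in its samples/dnn directory) that generates a compatible text graph from the frozen .pb. A hedged sketch of the two steps (paths and the checkpoint number are placeholders, not from the question):

```sh
# 1. Freeze the trained checkpoint (as in the question; XXXX is a placeholder).
python ../models/research/object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path ./ssd_mobilenet_v1_pets.config \
    --trained_checkpoint_prefix ./data/model.ckpt-XXXX \
    --output_directory ./exported

# 2. Generate an OpenCV-compatible .pbtxt from the frozen graph.
python tf_text_graph_ssd.py \
    --input ./exported/frozen_inference_graph.pb \
    --config ./ssd_mobilenet_v1_pets.config \
    --output ./exported/graph.pbtxt
```

The resulting pair (frozen_inference_graph.pb, graph.pbtxt) is what cv2.dnn.readNetFromTensorflow expects.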

Return coordinates that pass a threshold value for bounding boxes in Google's Object Detection API

北城余情, submitted on 2019-12-02 07:02:48
Does anyone know how to get only the bounding box coordinates that pass a threshold value? I found this answer (here's a link), so I tried using it and did the following: vis_util.visualize_boxes_and_labels_on_image_array( image, np.squeeze(boxes), np.squeeze(classes).astype(np.int32), np.squeeze(scores), category_index, use_normalized_coordinates=True, line_thickness=1, min_score_thresh=0.80) for i,b in enumerate(boxes[0]): ymin = boxes[0][i][0]*height xmin = boxes[0][i][1]*width ymax = boxes[0][i][2]*height xmax = boxes[0][i][3]*width print ("Top left") print (xmin,ymin,) print ("Bottom right
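The loop in the question prints every box regardless of score; filtering against the same 0.80 threshold passed to the visualizer can be sketched like this (the helper name is mine, not from the API):

```python
def boxes_above_threshold(boxes, scores, width, height, min_score=0.80):
    """Return pixel-space corners only for detections passing min_score."""
    kept = []
    for (ymin, xmin, ymax, xmax), score in zip(boxes, scores):
        if score >= min_score:
            # Scale normalized coordinates to pixels, matching the
            # Top left / Bottom right printout in the question.
            kept.append({'top_left': (xmin * width, ymin * height),
                         'bottom_right': (xmax * width, ymax * height)})
    return kept
```

Called as boxes_above_threshold(np.squeeze(boxes), np.squeeze(scores), width, height), it yields exactly the boxes that the visualizer drew.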

TensorFlow object detection api: classification weights initialization when changing number of classes at training using pre-trained models

拜拜、爱过, submitted on 2019-12-01 12:36:51
I want to utilize not only the feature-extractor pre-trained weights but also the feature-map layers' classifier/localization pre-trained weights when fine-tuning SSD models with the TensorFlow Object Detection API. When my new model has a different number of classes from the pre-trained model I'm using as the fine-tuning checkpoint, how does the TensorFlow Object Detection API handle the classification weight tensors? When fine-tuning pre-trained models in ML object detection models like SSD, I can initialize not only the feature-extractor weights with the pre
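In practice the API restores checkpoint variables by name and shape, so classification-head tensors whose class dimension no longer matches the new num_classes are skipped and freshly initialized, while the feature extractor and box-regression weights are restored. A hedged sketch of the relevant train_config fragment (the checkpoint path is a placeholder):

```proto
train_config {
  # Placeholder path to the pre-trained detection checkpoint.
  fine_tune_checkpoint: "PATH_TO/model.ckpt"
  # Restore detection-model variables (not just a classification backbone).
  # Variables whose shapes differ -- the class-prediction layers when
  # num_classes changes -- are skipped and randomly re-initialized.
  from_detection_checkpoint: true
}
```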

Tensorflow object detection api SSD model using 'keep_aspect_ratio_resizer'

心已入冬, submitted on 2019-12-01 11:05:50
I am trying to detect objects in images of varying shapes (not square). I used the faster_rcnn_inception_v2 model, where I can use an image resizer that maintains the aspect ratio of the image, and the output is satisfactory: image_resizer { keep_aspect_ratio_resizer { min_dimension: 100 max_dimension: 600 } } Now, for faster performance, I want to train with an SSD model such as ssd_inception_v2. The sample configuration uses a fixed-shape resize, as below: image_resizer { fixed_shape_resizer { height: 300 width: 300 } } But the problem is that I get a very poor detection result because of that
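SSD feeds batched, fixed-size tensors to the network, which is why the samples use fixed_shape_resizer. One commonly cited compromise is a keep_aspect_ratio_resizer that pads to a square, preserving aspect ratio while still producing a fixed input shape. A hedged sketch (the 300 dimensions are illustrative, chosen to match the SSD sample):

```proto
image_resizer {
  keep_aspect_ratio_resizer {
    min_dimension: 300
    max_dimension: 300
    # Pad the short side to max_dimension so every image comes out
    # 300x300 without distorting its aspect ratio.
    pad_to_max_dimension: true
  }
}
```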

Return coordinates for bounding boxes Google's Object Detection API

倾然丶 夕夏残阳落幕, submitted on 2019-12-01 09:34:31
How can I get the coordinates of the produced bounding boxes using the inference script of Google's Object Detection API? I know that printing boxes[0][i] returns the predictions for the ith detection in an image, but what exactly do these returned numbers mean? Is there a way I can get xmin, ymin, xmax, ymax? Thanks in advance. The boxes array you mention contains this information: its format is an [N, 4] array where each row has the form [ymin, xmin, ymax, xmax], in normalized coordinates relative to the size of the input image. Google Object Detection API returns
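Given that row format, turning a single boxes[0][i] entry into pixel-space corners is just a scale by the image dimensions (a sketch; the function name is mine):

```python
def denormalize_box(box, im_width, im_height):
    """box is [ymin, xmin, ymax, xmax] in [0, 1]; return (xmin, ymin, xmax, ymax)
    in pixels, scaled by the input image's width and height."""
    ymin, xmin, ymax, xmax = box
    return (xmin * im_width, ymin * im_height,
            xmax * im_width, ymax * im_height)
```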
