Question
Intro: I'm new to machine learning, and a colleague and I have to implement an algorithm for detecting traffic lights. I downloaded a pre-trained model (Faster R-CNN) and ran several training steps (~10,000). Now, when I use the object detection code from the TensorFlow git repository, several overlapping traffic lights are detected in the same area.
I did a little research and found the function tf.image.non_max_suppression, but I cannot get it to work as intended (to be honest, I cannot even get it to run).
I assume you know the TensorFlow object detection sample code, so you also know that all boxes are returned in a dictionary (output_dict).
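For reference, here is a minimal sketch of what output_dict typically contains in that sample code (the shapes and values below are illustrative assumptions, not output from my model):

import numpy as np

# Illustrative output_dict layout: N detections with normalized
# [ymin, xmin, ymax, xmax] boxes, one score and one class id per box.
output_dict = {
    'detection_boxes': np.array([[0.10, 0.20, 0.30, 0.40],
                                 [0.11, 0.21, 0.31, 0.41]], dtype=np.float32),
    'detection_scores': np.array([0.95, 0.90], dtype=np.float32),
    'detection_classes': np.array([10, 10], dtype=np.int64),  # 10 = 'traffic light' in the COCO label map
}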
To "clean" the boxes I use :
selected_indices = tf.image.non_max_suppression(
    boxes=output_dict['detection_boxes'],
    scores=output_dict['detection_scores'],
    max_output_size=1,
    iou_threshold=0.5,
    score_threshold=float('-inf'),
    name=None)
At first I thought I could use selected_indices as a new list of boxes, so I tried this:
vis_util.visualize_boxes_and_labels_on_image_array(
    image=image_np,
    boxes=selected_indices,
    classes=output_dict['detection_classes'],
    scores=output_dict['detection_scores'],
    category_index=category_index,
    instance_masks=output_dict.get('detection_masks'),
    use_normalized_coordinates=True)
When I noticed this wouldn't work, I found the method I was missing: tf.gather(). Then I ran the following code:
boxes = output_dict['detection_boxes']
selected_indices = tf.image.non_max_suppression(
    boxes=boxes,
    scores=output_dict['detection_scores'],
    max_output_size=1,
    iou_threshold=0.5,
    score_threshold=float('-inf'),
    name=None)
selected_boxes = tf.gather(boxes, selected_indices)
vis_util.visualize_boxes_and_labels_on_image_array(
    image=image_np,
    boxes=selected_boxes,
    classes=output_dict['detection_classes'],
    scores=output_dict['detection_scores'],
    category_index=category_index,
    instance_masks=output_dict.get('detection_masks'),
    use_normalized_coordinates=True)
But not even that works: I receive an AttributeError ('Tensor' object has no attribute 'tolist') in visualization_utils.py on line 689.
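For context, the traceback suggests the visualization utility iterates over the boxes and calls .tolist() on each row, which NumPy arrays support but symbolic Tensors do not. A minimal sketch of the mismatch (TF 1.x graph mode assumed):

import numpy as np
import tensorflow as tf

np_boxes = np.array([[0.1, 0.2, 0.3, 0.4]], dtype=np.float32)
print(np_boxes[0].tolist())   # works: [0.1, 0.2, 0.3, 0.4]

tf_boxes = tf.constant(np_boxes)
# tf_boxes[0].tolist()        # AttributeError: 'Tensor' object has no attribute 'tolist'
# The tensor has to be evaluated (e.g. sess.run or .eval()) before it is
# passed to visualize_boxes_and_labels_on_image_array.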
Answer 1:
So it looks like, to get the boxes in the right format, you need to create a session and evaluate the tensors as follows:
suppressed = tf.image.non_max_suppression(
    output_dict['detection_boxes'],
    output_dict['detection_scores'],
    5)  # replace 5 with the maximum number of boxes you want to keep
sboxes = tf.gather(output_dict['detection_boxes'], suppressed)
sscores = tf.gather(output_dict['detection_scores'], suppressed)
sclasses = tf.gather(output_dict['detection_classes'], suppressed)

sess = tf.Session()
with sess.as_default():
    boxes = sboxes.eval()
    scores = sscores.eval()
    classes = sclasses.eval()

vis_util.visualize_boxes_and_labels_on_image_array(
    image_np,
    boxes,
    classes,
    scores,
    category_index,
    instance_masks=output_dict.get('detection_masks'),
    use_normalized_coordinates=True,
    line_thickness=8)
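If you are on TensorFlow 2.x (eager execution), the Session/.eval() step is not needed; here is a minimal sketch of the same idea, assuming output_dict holds NumPy arrays as in the detection demo and that vis_util, image_np, and category_index are the same objects as above, with .numpy() replacing the evaluation step:

import tensorflow as tf

# Keep at most 5 boxes; overlapping boxes with IoU > 0.5 are suppressed.
selected = tf.image.non_max_suppression(
    output_dict['detection_boxes'],
    output_dict['detection_scores'],
    max_output_size=5,
    iou_threshold=0.5)

boxes = tf.gather(output_dict['detection_boxes'], selected).numpy()
scores = tf.gather(output_dict['detection_scores'], selected).numpy()
classes = tf.gather(output_dict['detection_classes'], selected).numpy()

vis_util.visualize_boxes_and_labels_on_image_array(
    image_np,
    boxes,
    classes,
    scores,
    category_index,
    instance_masks=output_dict.get('detection_masks'),
    use_normalized_coordinates=True,
    line_thickness=8)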
Source: https://stackoverflow.com/questions/54538497/tensorflow-object-detection-avoid-overlapping-boxes