In reference to object_detection_tutorial.ipynb: I am wondering if it's possible to run it for all the images in a directory, rather than writing a for loop and running inference one image at a time.
It is possible to run inference on a batch of images, depending on the computational power of your GPU and the size of the images.
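For context, the snippets below assume sess, image_tensor, and tensor_dict are set up from the tutorial's detection_graph, roughly like this (a sketch; the notebook itself builds tensor_dict slightly differently, by iterating over the graph's operations):

import tensorflow as tf

# detection_graph is the frozen graph loaded earlier in the notebook
sess = tf.Session(graph=detection_graph)
# input placeholder; accepts a 4-D batch of shape (?, height, width, 3)
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
# standard output tensors of Object Detection API frozen graphs
tensor_dict = {
    'num_detections': detection_graph.get_tensor_by_name('num_detections:0'),
    'detection_boxes': detection_graph.get_tensor_by_name('detection_boxes:0'),
    'detection_scores': detection_graph.get_tensor_by_name('detection_scores:0'),
    'detection_classes': detection_graph.get_tensor_by_name('detection_classes:0'),
}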
Step 1: stack all the test images into one array:

import glob
import numpy as np
from skimage import io

image_array = []
for image_path in glob.glob(PATH_TO_TEST_IMAGES_DIR + '/*.jpg'):
    image_np = io.imread(image_path)
    image_array.append(image_np)
image_array = np.array(image_array)
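Note that np.array can only stack the images into a single 4-D array if they all have the same height and width. If your test images vary in size, add a resize step to the loop; a minimal sketch using skimage (TARGET_H and TARGET_W are hypothetical values you would choose for your model):

from skimage.transform import resize

TARGET_H, TARGET_W = 600, 600  # hypothetical target size

image_array = []
for image_path in glob.glob(PATH_TO_TEST_IMAGES_DIR + '/*.jpg'):
    image_np = io.imread(image_path)
    # resize returns floats in [0, 1]; scale back to uint8 for the detector
    image_np = (resize(image_np, (TARGET_H, TARGET_W)) * 255).astype(np.uint8)
    image_array.append(image_np)
image_array = np.array(image_array)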
Step 2: run inference in batches (a higher batch size may cause out-of-memory errors):

BATCH_SIZE = 5

output_dict_array = []
for i in range(0, image_array.shape[0], BATCH_SIZE):
    output_dict = sess.run(tensor_dict,
                           feed_dict={image_tensor: image_array[i:i + BATCH_SIZE]})
    output_dict_array.append(output_dict)
    print("number of images processed =", min(i + BATCH_SIZE, image_array.shape[0]))
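Every value in each output_dict carries a leading batch dimension, so you may want to split the results back into per-image dictionaries afterwards. A sketch, assuming the standard output keys from the setup above:

per_image_results = []
for output_dict in output_dict_array:
    batch_size = output_dict['num_detections'].shape[0]
    for b in range(batch_size):
        per_image_results.append({
            'num_detections': int(output_dict['num_detections'][b]),
            'detection_boxes': output_dict['detection_boxes'][b],
            'detection_scores': output_dict['detection_scores'][b],
            # the notebook casts classes to an integer dtype for label lookup
            'detection_classes': output_dict['detection_classes'][b].astype(np.int64),
        })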
Make sure the dimensions of image_tensor and image_array match: in this example image_array has shape (?, height, width, 3), where ? is the batch size.
Some tips: