run_inference_for_single_image(image, graph) - TensorFlow, object detection

Backend · Unresolved · 3 answers · 1547 views
有刺的猬 2021-01-06 20:47

In reference to object_detection_tutorial.ipynb: I am wondering if it's possible to run inference for all the images in a directory.

Rather than writing a for loop and running run_inference_for_single_image on each image separately.

3 Answers
  • 2021-01-06 21:40

    As you can see, run_inference_for_single_image creates a new tf.Session on every call. If you want to run inference on multiple images, change the code like this:

    • Method Call

      images = []
      for f in files:
          if f.lower().endswith(('.png', '.jpg', '.jpeg')):
              image_path = files_dir + '/' + f
              image = ...  # read the image here
              images.append(image)
      output_dicts = run_inference_for_multiple_images(images, detection_graph)
      
    • run_inference_for_multiple_images

      def run_inference_for_multiple_images(images, graph):
          with graph.as_default():
              with tf.Session() as sess:
                  output_dicts = []

                  for index, image in enumerate(images):
                      # ... same as inference for a single image ...
                      output_dicts.append(output_dict)

                  return output_dicts
      

    This way the tf.Session is created only once, instead of once per image.
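    A toy illustration (no TensorFlow) of why the change matters: the expensive setup, standing in for tf.Session creation, happens once per call, so batching all images into one call pays it only once. All names here are hypothetical.

    ```python
    # Counter stands in for the cost of creating a tf.Session.
    setup_count = 0

    def run_inference_for_multiple_images(images):
        global setup_count
        setup_count += 1                       # expensive one-time setup
        return ['detections for ' + img for img in images]

    outputs = run_inference_for_multiple_images(['a.jpg', 'b.jpg', 'c.jpg'])
    # setup_count is 1, no matter how many images were passed
    ```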

  • 2021-01-06 21:46

    I found this tutorial from Google: creating-object-detection-application-tensorflow. Looking into its GitHub page --> object_detection_app --> app.py, we only need to call the detect_objects(image_path) function each time we want to detect objects in an image.
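    A hypothetical driver around that idea: walk a directory and call detect_objects once per image file. detect_objects lives in the linked app.py; it is stubbed here only so the sketch runs on its own.

    ```python
    import os

    def detect_objects(image_path):
        # Stub: the real implementation is in the tutorial's app.py.
        return {'path': image_path}

    def detect_in_directory(dir_path):
        results = []
        for name in sorted(os.listdir(dir_path)):
            if name.lower().endswith(('.png', '.jpg', '.jpeg')):
                results.append(detect_objects(os.path.join(dir_path, name)))
        return results
    ```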

  • 2021-01-06 21:47

    It is possible to run inference on a batch of images, depending on the computational power of the GPU and the size of the images.

    Step 1: stack all the test images into one array:

    import glob
    import numpy as np
    from skimage import io

    image_array = []
    for image_path in glob.glob(PATH_TO_TEST_IMAGES_DIR + '/*.jpg'):
        image_np = io.imread(image_path)
        image_array.append(image_np)
    image_array = np.array(image_array)  # images must share dimensions to stack
    

    Step 2: run inference in batches (a larger batch size may cause out-of-memory errors):

      BATCH_SIZE = 5
      output_dict_array = []
      for i in range(0, image_array.shape[0], BATCH_SIZE):
          output_dict = sess.run(tensor_dict,
                                 feed_dict={image_tensor: image_array[i:i+BATCH_SIZE]})
          print("number of images inferenced =", min(i + BATCH_SIZE, image_array.shape[0]))
          output_dict_array.append(output_dict)
    

    Make sure the dimensions of image_tensor and image_array match. In this example image_array has shape (?, height, width, 3).
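    The slicing in step 2 can be checked on a plain list: range(0, n, BATCH_SIZE) with arr[i:i+BATCH_SIZE] visits every element exactly once, and the final batch is simply shorter when n is not a multiple of BATCH_SIZE.

    ```python
    BATCH_SIZE = 5
    items = list(range(12))   # stands in for image_array
    batches = [items[i:i + BATCH_SIZE] for i in range(0, len(items), BATCH_SIZE)]
    # -> two full batches of 5, then a final batch of 2
    ```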

    Some tips:

    1. Load the graph only once, since loading it takes a few seconds.
    2. skimage.io.imread() and cv2.imread() are fast at loading images, and both return them directly as numpy arrays.
    3. skimage and opencv are faster than matplotlib for saving images.