Input images with dynamic dimensions in Tensorflow-lite

暖寄归人 · 2021-01-04 20:07

I have a TensorFlow model that takes input images of varying size:

inputs = layers.Input(shape=(128, None, 1), name='x_input')
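
For context, a minimal sketch of what such a model might look like (the convolutional body and the global pooling layer are illustrative assumptions, not part of the question):

import tensorflow as tf
from tensorflow.keras import layers

# Height fixed at 128, width dynamic (None), single channel.
inputs = layers.Input(shape=(128, None, 1), name='x_input')

# Hypothetical body: convolutions tolerate a dynamic width, and global
# pooling removes the variable dimension before the classification head.
x = layers.Conv2D(32, 3, padding='same', activation='relu')(inputs)
x = layers.Conv2D(64, 3, padding='same', activation='relu')(x)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(10, activation='softmax')(x)

model = tf.keras.Model(inputs, outputs)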



        
1 Answer
  • 2021-01-04 20:42

    Yes, you can use dynamic tensors in TF-Lite. The reason you can't directly set the shape to [None, 128, None, 1] is that static shapes make it easier to support more language bindings in the future, and they let TF-Lite make full use of its static memory-allocation scheme. That is a sensible design choice for a framework intended to run on small devices with limited compute power. Here are the steps to set the tensor's size dynamically:

    0. Freezing

    It seems like you're converting from a frozen GraphDef, i.e. a *.pb file. Suppose your frozen model has input shape [None, 128, None, 1].
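
    If you still need to produce that frozen *.pb from the Keras model, a minimal TF 1.x sketch (the output node name 'Softmax' is an assumption, chosen to match the conversion command below) could be:

    import tensorflow as tf
    from tensorflow.keras import backend as K

    # Assumes the Keras model from the question has already been built and
    # trained in the default TF 1.x session, with an output op named 'Softmax'.
    sess = K.get_session()
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph.as_graph_def(), ['Softmax'])
    tf.io.write_graph(frozen_graph_def, '.', 'model.pb', as_text=False)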

    1. Conversion step

    During this step, set the input size to any valid one that can be accepted by your model. For example:

    # Note: --input_shapes is set to an arbitrary valid shape here.
    tflite_convert \
      --graph_def_file='model.pb' \
      --output_file='model.tflite' \
      --input_shapes=1,128,80,1 \
      --input_arrays='input' \
      --output_arrays='Softmax'
    
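    If you prefer the Python converter API over the CLI, a roughly equivalent TF 1.x sketch (the file and node names simply mirror the command above) would be:

    import tensorflow as tf

    # Fix an arbitrary valid shape at conversion time, matching the CLI flags above.
    converter = tf.lite.TFLiteConverter.from_frozen_graph(
        graph_def_file='model.pb',
        input_arrays=['input'],
        output_arrays=['Softmax'],
        input_shapes={'input': [1, 128, 80, 1]})

    with open('model.tflite', 'wb') as f:
        f.write(converter.convert())
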

    2. Inference step

    The trick is to call the Interpreter's resize_tensor_input(...) method of the TF-Lite API at inference time. I will show a Python implementation; the Java and C++ versions should work the same way, since their APIs are similar:

    # In older TF 1.x the Interpreter lives under contrib; in newer
    # versions use tf.lite.Interpreter instead.
    from tensorflow.contrib.lite.python.interpreter import Interpreter

    # Load the *.tflite model and get input details
    model = Interpreter(model_path='model.tflite')
    input_details = model.get_input_details()

    # Your network currently has an input shape (1, 128, 80, 1),
    # but suppose you need the input size to be (2, 128, 200, 1).
    model.resize_tensor_input(
        input_details[0]['index'], (2, 128, 200, 1))
    model.allocate_tensors()
    

    That's it. You can now use the model for images with shape (2, 128, 200, 1), as long as your network architecture allows such an input shape. Beware that you must call model.allocate_tensors() every time you resize, which is expensive, so avoid resizing the input more often than necessary.
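
    Putting it together, a rough end-to-end inference sketch continuing from the snippet above (the random batch and the float32 dtype are assumptions about your data) might look like this:

    import numpy as np

    # Hypothetical batch: 2 images, 128 px high, 200 px wide, 1 channel.
    images = np.random.random_sample((2, 128, 200, 1)).astype(np.float32)

    input_details = model.get_input_details()
    output_details = model.get_output_details()

    # Resize to match this batch, then re-allocate (required after every resize).
    model.resize_tensor_input(input_details[0]['index'], (2, 128, 200, 1))
    model.allocate_tensors()

    model.set_tensor(input_details[0]['index'], images)
    model.invoke()
    predictions = model.get_tensor(output_details[0]['index'])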
