Edge TPU Compiler: ERROR: quantized_dimension must be in range [0, 1). Was 3

Asked 2021-02-02 02:57 by 野性不改

I'm trying to get a MobileNetV2 model (with the last layers retrained on my own data) to run on the Google Coral Edge TPU.

I've followed this tutorial: https://www.tensorflow.org/li

4 Answers
  • 2021-02-02 03:42

    Do you still have this issue after updating to the newest compiler version?

    Edge TPU Compiler version 2.0.267685300
    
  • 2021-02-02 03:47

    I have the same problem and the same error message. I retrained MobileNetV2 using tensorflow.keras.applications. I found some big differences in the TFLite tensors between my model and Coral's example models (https://coral.withgoogle.com/models/).

    First, the input and output tensor types are different. When I convert my tf.keras model to TFLite, it contains float input and output tensors, while the example model uses integer types. The result also differs between the command-line conversion and the Python conversion from TensorFlow Lite (https://www.tensorflow.org/lite/convert/): the command-line conversion outputs integer-typed I/O, but the Python conversion outputs float-typed I/O. (This is really strange.)

    Second, there is no Batch Normalization (BN) layer in the example model, whereas there are several BNs in the Keras MobileNetV2. I suspect the number of 'ERROR: quantized_dimension must be in range [0, 1). Was 3.' messages is related to the number of BN layers, because there are 17 BN layers in my Keras model (see the counting sketch at the end of this answer).

    I'm still struggling with this problem. For now, I'm going to follow Coral's retraining example to work around it (https://coral.withgoogle.com/docs/edgetpu/retrain-detection/).
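
    If you want to check this hypothesis on your own network, counting the BatchNormalization layers is straightforward. This is a minimal sketch; the freshly constructed MobileNetV2 below is a stand-in for your retrained model:

    import tensorflow as tf

    # Stand-in for the retrained network; replace with your own model.
    model = tf.keras.applications.MobileNetV2(weights=None)

    # Count BN layers to compare against the number of compiler errors.
    bn_layers = [layer for layer in model.layers
                 if isinstance(layer, tf.keras.layers.BatchNormalization)]
    print(f"BatchNormalization layers: {len(bn_layers)}")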

  • 2021-02-02 03:48

    This problem is fixed in TensorFlow 1.15-rc. Convert your model to TFLite with the new TF version; the resulting TFLite model will then work with the Edge TPU compiler.

    Also add these lines, which make the TFLite model's input and output uint8 (I think it should be tf.int8, though):

    import tensorflow as tf

    # 'converter' is an existing tf.lite.TFLiteConverter for your model.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8   # force uint8 input tensors
    converter.inference_output_type = tf.uint8  # force uint8 output tensors
    

    For details, check https://www.tensorflow.org/lite/performance/post_training_quantization
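
    Note that, at least with the TF 2.x converter, the uint8 I/O settings above only take effect when you also request full-integer quantization with a representative dataset for calibration; otherwise the converter keeps float I/O or raises an error. A minimal sketch, assuming `model` is your retrained Keras MobileNetV2 with 224x224x3 inputs (the random calibration tensors are placeholders for ~100 real preprocessed samples):

    import numpy as np
    import tensorflow as tf

    converter = tf.lite.TFLiteConverter.from_keras_model(model)  # TF 2.x API
    converter.optimizations = [tf.lite.Optimize.DEFAULT]

    # Calibration data drives the full-integer quantization ranges.
    def representative_dataset():
        for _ in range(100):
            yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

    converter.representative_dataset = representative_dataset
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.uint8
    converter.inference_output_type = tf.uint8

    with open("model_quant.tflite", "wb") as f:
        f.write(converter.convert())

    The resulting model_quant.tflite can then be fed to the Edge TPU compiler.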

  • 2021-02-02 03:53

    I had similar errors. Do the post-training full-integer quantization with the tf-nightly 1.15 build, then compile the resulting .tflite file with the Edge TPU compiler; it should work. My error was solved with this approach. (You can sanity-check the quantized file before compiling; see the sketch below.)

    The same issue was raised on GitHub; you can see it here.
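
    Before running the Edge TPU compiler, it can help to confirm that the quantized .tflite really has integer input and output tensors. A minimal sketch; the filename model_quant.tflite is a placeholder for your converted model:

    import tensorflow as tf

    interpreter = tf.lite.Interpreter(model_path="model_quant.tflite")
    interpreter.allocate_tensors()

    # A fully integer-quantized model should report uint8 (or int8) here.
    print(interpreter.get_input_details()[0]["dtype"])
    print(interpreter.get_output_details()[0]["dtype"])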
