Edge TPU Compiler: ERROR: quantized_dimension must be in range [0, 1). Was 3

Submitted by 大憨熊 on 2019-12-03 06:19:10

Question


I'm trying to get a Mobilenetv2 model (last layers retrained on my data) to run on the Google Coral Edge TPU.

I've followed this tutorial https://www.tensorflow.org/lite/performance/post_training_quantization?hl=en to do the post-training quantization. The relevant code is:

import numpy as np
import tensorflow as tf

...
train = tf.convert_to_tensor(np.array(train, dtype='float32'))
my_ds = tf.data.Dataset.from_tensor_slices(train).batch(1)


# POST TRAINING QUANTIZATION
def representative_dataset_gen():
    for input_value in my_ds.take(30):
        yield [input_value]

converter = tf.lite.TFLiteConverter.from_keras_model_file(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_quant_model = converter.convert()

I've successfully generated the quantized tflite model, but when I run edgetpu_compiler (following this page https://coral.withgoogle.com/docs/edgetpu/compiler/#usage) I get this output:

edgetpu_compiler Notebooks/MobileNetv2_3class_visit_split_best-val-acc.h5.quant.tflite

Edge TPU Compiler version 2.0.258810407
INFO: Initialized TensorFlow Lite runtime.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
ERROR: quantized_dimension must be in range [0, 1). Was 3.
Invalid model: Notebooks/MobileNetv2_3class_visit_split_best-val-acc.h5.quant.tflite
Model could not be parsed

The input shape of the model is a 3-channel RGB image. Is it possible to do full integer quantization on 3-channel images? I couldn't find anything saying that you can't in either the TensorFlow or the Google Coral documentation.
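A note on what the message likely means: it is probably not about the 3 RGB channels. `quantized_dimension` is the per-axis (per-channel) quantization metadata in the TFLite flatbuffer; newer converters quantize weight tensors with one scale per slice along some axis (axis 3 is, for example, the channel axis of a depthwise conv kernel, of which MobileNet has many), while this compiler version only accepted per-tensor quantization (hence the range [0, 1)). A minimal numpy sketch of the idea, with a made-up kernel and function name (not a TFLite API):

```python
import numpy as np

# Hypothetical 2x2 kernel with 3 channels on axis 3, like a TFLite
# DepthwiseConv2D weight tensor of shape [1, H, W, channels].
kernel = np.arange(12, dtype=np.float32).reshape(1, 2, 2, 3) - 6.0

def quantize_per_channel(w, axis):
    # One int8 scale per slice along `axis` -- this is what a tensor
    # with quantized_dimension == axis carries in the flatbuffer.
    reduce_axes = tuple(i for i in range(w.ndim) if i != axis)
    scales = np.max(np.abs(w), axis=reduce_axes) / 127.0
    shape = [1] * w.ndim
    shape[axis] = -1
    q = np.round(w / scales.reshape(shape)).astype(np.int8)
    return q, scales

q, scales = quantize_per_channel(kernel, axis=3)
print(len(scales))  # 3 -> one scale per channel, quantized_dimension == 3
```

Per-tensor quantization would instead store a single scale with `quantized_dimension == 0`, which is what this compiler version expected.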


Answer 1:


I had similar errors. Do the post-training full integer quantization with the tf-nightly 1.15 build, then take that .tflite file and compile it with the Edge TPU compiler; it should work. My error was solved with this approach.

The same issue was raised on GitHub; you can see it here.




Answer 2:


I have the same problem and the same error message. I retrained MobilenetV2 using tensorflow.keras.applications mobilenetv2. I found some big differences in the TFLite tensors between my model and Coral's example models (https://coral.withgoogle.com/models/).

First, the types of the input and output are different. When I convert my tf.keras model to tflite, it contains float-type input and output tensors, while the example model has integer types. The result also differs between the command-line conversion and the Python conversion from tensorflow-lite (https://www.tensorflow.org/lite/convert/): the command-line conversion outputs integer-type io, but the Python conversion outputs float-type io. (This is really strange.)
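The float io from the Python converter can be reproduced with a minimal sketch. The tiny Dense model below is a hypothetical stand-in for the retrained MobilenetV2; the point is that even with the int8 ops set and a representative dataset, the Python API leaves the input and output tensors as float32 unless told otherwise:

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy model standing in for the retrained network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

def representative_dataset_gen():
    # Calibration samples shaped like the model input.
    for _ in range(10):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()

# Internals are quantized, but the io tensors stay float32 by default.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
print(interpreter.get_input_details()[0]["dtype"])
```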

Second, there is no batch normalization (BN) layer in the example model, while there are several BNs in the Keras MobilenetV2. I think the number of 'ERROR: quantized_dimension must be in range [0, 1). Was 3.' messages is related to the number of BN layers, because there are 17 BN layers in the Keras model.

I'm still struggling with this problem. I'm just going to follow Coral's retraining example to solve it. (https://coral.withgoogle.com/docs/edgetpu/retrain-detection/)




Answer 3:


This problem is fixed in TensorFlow 1.15-rc. Convert your model to TFLite with the new TF version, and the TFLite model will work with the Edge TPU compiler.

Also add these lines, which make the TFLite model's input and output uint8 type. (I think it should be tf.int8, though.)

converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

Check the link below. https://www.tensorflow.org/lite/performance/post_training_quantization
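Combined with a representative dataset, those three lines fit into a full conversion like the following minimal sketch (the toy Dense model is a hypothetical stand-in for the retrained MobilenetV2; assumes TF 1.15+/2.x):

```python
import numpy as np
import tensorflow as tf

# Hypothetical toy model standing in for the retrained MobilenetV2.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

def representative_dataset_gen():
    for _ in range(10):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()

# The io tensors are now uint8, matching what the Edge TPU expects.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
print(interpreter.get_input_details()[0]["dtype"])
print(interpreter.get_output_details()[0]["dtype"])
```

The resulting .tflite file can then be passed to edgetpu_compiler as in the question.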




Answer 4:


Do you still have this issue after updating to the newest compiler version?

Edge TPU Compiler version 2.0.267685300


Source: https://stackoverflow.com/questions/57234308/edge-tpu-compiler-error-quantized-dimension-must-be-in-range-0-1-was-3
