Convert a frozen graph to TFLite for Coral using tflite_convert


Question


I'm using MobileNetV2 and trying to get it working on Google Coral. Everything seems to work except the Coral Web Compiler, which throws a random error: Uncaught application failure. So I think the problem lies in the intermediary steps required. For example, here is the tflite_convert command I'm using:

tflite_convert \
  --graph_def_file=optimized_graph.pb \
  --output_format=TFLITE \
  --output_file=mobilenet_v2_new.tflite \
  --inference_type=FLOAT \
  --inference_input_type=FLOAT \
  --input_arrays=input \
  --output_arrays=final_result \
  --input_shapes=1,224,224,3

What am I getting wrong?


Answer 1:


This is most likely because your model is not quantized. Edge TPU devices do not currently support float-based model inference. For the best results, you should enable quantization during training (see TensorFlow's documentation on quantization-aware training). However, you can also apply quantization during TensorFlow Lite conversion.

With post-training quantization, you sacrifice some accuracy but can test something out more quickly. When you convert your graph to TensorFlow Lite format, set --inference_type to QUANTIZED_UINT8. You'll also need to supply the quantization parameters (mean_values, std_dev_values, and default ranges) on the command line:

tflite_convert \
  --graph_def_file=optimized_graph.pb \
  --output_format=TFLITE \
  --output_file=mobilenet_v2_new.tflite \
  --inference_type=QUANTIZED_UINT8 \
  --input_arrays=input \
  --output_arrays=final_result \
  --input_shapes=1,224,224,3 \
  --mean_values=128 --std_dev_values=127 \
  --default_ranges_min=0 --default_ranges_max=255
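
The same post-training quantization can also be expressed through the Python API. Below is a minimal sketch, assuming TensorFlow 1.x and the same file and tensor names as above (optimized_graph.pb, input, final_result); adapt it to your exact setup:

import tensorflow as tf  # TensorFlow 1.x

# Build a converter from the frozen graph, mirroring the flags above.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="optimized_graph.pb",
    input_arrays=["input"],
    output_arrays=["final_result"],
    input_shapes={"input": [1, 224, 224, 3]},
)
converter.inference_type = tf.uint8  # equivalent to --inference_type=QUANTIZED_UINT8
# (mean, std_dev) per input tensor, matching --mean_values/--std_dev_values.
converter.quantized_input_stats = {"input": (128.0, 127.0)}
# Fallback ranges for ops without recorded min/max,
# matching --default_ranges_min/--default_ranges_max.
converter.default_ranges_stats = (0, 255)

tflite_model = converter.convert()
with open("mobilenet_v2_new.tflite", "wb") as f:
    f.write(tflite_model)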

You can then pass the quantized .tflite file to the Edge TPU Model Compiler.
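
If you compile with the offline edgetpu_compiler tool instead of the web compiler (by default it writes mobilenet_v2_new_edgetpu.tflite next to the input file), you can sanity-check the result on the device with the Edge TPU delegate. A minimal Python sketch, assuming the tflite_runtime package is installed and using the Linux delegate library name:

import numpy as np
from tflite_runtime.interpreter import Interpreter, load_delegate

# Load the compiled model and bind it to the Edge TPU.
# "libedgetpu.so.1" is the Linux library name; it differs on macOS/Windows.
interpreter = Interpreter(
    model_path="mobilenet_v2_new_edgetpu.tflite",
    experimental_delegates=[load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# A quantized model expects uint8 input of shape (1, 224, 224, 3).
image = np.zeros((1, 224, 224, 3), dtype=np.uint8)  # stand-in for a real image
interpreter.set_tensor(input_details[0]["index"], image)
interpreter.invoke()
scores = interpreter.get_tensor(output_details[0]["index"])
print(scores.shape)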

For more details on the Edge TPU model requirements, check out TensorFlow models on the Edge TPU.



Source: https://stackoverflow.com/questions/55320098/convert-frozen-graph-for-tflite-for-coral-using-tflite-convert
