How to convert a model trained on a custom dataset for the Edge TPU board?

て烟熏妆下的殇ゞ submitted on 2020-06-17 15:20:24

Question


I have trained a model on my custom dataset using the TensorFlow Object Detection API. My "prediction" script runs fine on the GPU. Now I want to convert the model to TensorFlow Lite and run it on the Google Coral Edge TPU board to detect my custom objects. I have gone through the documentation on the Google Coral website, but I found it very confusing. How do I convert the model and run it on the Edge TPU board? Thanks.


Answer 1:


Without reading the documentation, it will be very hard to continue. I'm not sure what your "prediction script" is, but I'm assuming it loads a .pb TensorFlow model, loads some image data, and runs inference on it to produce prediction results. That means you have a .pb TensorFlow model at the "Frozen graph" stage of the following pipeline:

[Pipeline diagram: TensorFlow model → frozen graph → quantized .tflite → Edge TPU compiler. Image taken from coral.ai.]

The next step is to convert your .pb model to a "fully quantized .tflite model" using the post-training quantization technique. The documentation for doing that is given here, and I also created a GitHub gist containing an example of post-training quantization here (a minimal sketch also follows below). Once you have produced the .tflite model, you'll need to compile it via the edgetpu_compiler. Although everything you need to know about the edgetpu compiler is in that link, for your purpose, compiling a model is as simple as:

$ edgetpu_compiler your_model_name.tflite

This creates a your_model_name_edgetpu.tflite model that is compatible with the Edge TPU. If at this stage you are getting errors instead of an Edge-TPU-compatible model, your model does not meet the requirements posted in the model-requirements section.
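For reference, here is a minimal post-training quantization sketch in Python, assuming TensorFlow 1.15. The file name, tensor names, and input shape are placeholders you must adapt to your own model, and the random representative dataset stands in for ~100 real calibration images:

import numpy as np
import tensorflow as tf

# Placeholder names: adapt the graph file, tensor names, and shape to your model.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='frozen_inference_graph.pb',
    input_arrays=['input_tensor'],
    output_arrays=['output_tensor'],
    input_shapes={'input_tensor': [1, 224, 224, 3]})

# The converter calibrates activation ranges on a small representative dataset.
def representative_dataset():
    for _ in range(100):
        # Replace this random data with real preprocessed images.
        yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer ops; the Edge TPU only executes int8 kernels.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8

with open('your_model_name.tflite', 'wb') as f:
    f.write(converter.convert())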

Once you have produced a compiled model, you can deploy it on an Edge TPU device. Currently there are two main APIs that can be used to run inference with the model:

  • EdgeTPU API
    • Python API
    • C++ API
  • tflite API
    • C++ API
    • Python API

Finally, there are many demo examples for running inference on the model here; a minimal sketch follows below.
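As a concrete starting point, here is a minimal inference sketch using the tflite API (the tflite_runtime Python package) with the Edge TPU delegate. It assumes libedgetpu is installed and feeds a dummy input; the model path comes from the compile step above:

import numpy as np
import tflite_runtime.interpreter as tflite

# Load the compiled model and attach the Edge TPU delegate.
interpreter = tflite.Interpreter(
    model_path='your_model_name_edgetpu.tflite',
    experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy uint8 input matching the model's expected shape; replace with a real image.
dummy = np.zeros(input_details[0]['shape'], dtype=np.uint8)
interpreter.set_tensor(input_details[0]['index'], dummy)
interpreter.invoke()

print(interpreter.get_tensor(output_details[0]['index']))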




Answer 2:


The previous answer works for general classification models, but not for models trained with the TF Object Detection API.

You cannot run post-training quantization with the TF Lite converter on TF Object Detection API models.

To run object detection models on Edge TPUs:

  1. You must train the model in quantization-aware training mode, with this addition in the model config:

graph_rewriter {
  quantization {
    delay: 48000
    weight_bits: 8
    activation_bits: 8
  }
}

This might not work with all the models provided in the model zoo; start from a quantized model first.

  2. After training, export the frozen graph with object_detection/export_tflite_ssd_graph.py.

  3. Run the tensorflow/lite/toco tool on the frozen graph to make it TFLite compatible (see the command sketch after this list).

  4. Finally, run edgetpu_compiler on the .tflite file.
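A command sketch for steps 2-4 follows. The checkpoint number, config path, and 300x300 input shape are placeholders for a typical SSD model; the input/output array names are the standard ones produced by export_tflite_ssd_graph.py:

python object_detection/export_tflite_ssd_graph.py \
    --pipeline_config_path=pipeline.config \
    --trained_checkpoint_prefix=model.ckpt-50000 \
    --output_directory=tflite_graph \
    --add_postprocessing_op=true

tflite_convert \
    --graph_def_file=tflite_graph/tflite_graph.pb \
    --output_file=tflite_graph/detect.tflite \
    --input_arrays=normalized_input_image_tensor \
    --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
    --input_shapes=1,300,300,3 \
    --inference_type=QUANTIZED_UINT8 \
    --mean_values=128 \
    --std_dev_values=128 \
    --allow_custom_ops

edgetpu_compiler tflite_graph/detect.tflite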

You can find a more in-depth guide here: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tensorflowlite.md



Source: https://stackoverflow.com/questions/59847070/how-to-convert-model-trained-on-custom-data-set-for-the-edge-tpu-board
