Question:
I would like to use NVIDIA TensorRT to run my TensorFlow models. Currently, TensorRT supports Caffe prototxt network descriptor files.
I was not able to find source code to convert TensorFlow models to Caffe models. Are there any workarounds?
Answer 1:
TensorRT 3.0 supports import/conversion of TensorFlow graphs via its UFF (Universal Framework Format). Some layer implementations are missing and will require custom implementations via the IPlugin interface.
Previous versions did not support native import of TensorFlow models/checkpoints.
Alternatively, you can export the layer/network description into your own intermediate format (such as a text file) and then use the TensorRT C++ API to construct the graph for inference. You would have to export the convolution weights/biases separately. Make sure to pay attention to the weight format: TensorFlow uses NHWC activation layout while TensorRT uses NCHW. For convolution weights, TF uses RSCK ([filter_height, filter_width, input_depth, output_depth]) while TensorRT uses KCRS ([output_depth, input_depth, filter_height, filter_width]).
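The layout conversions above are plain axis transpositions. Here is a minimal NumPy sketch (the shapes are hypothetical, chosen only for illustration; any TF conv layer's weights follow the same pattern):

```python
import numpy as np

# TensorFlow stores conv weights as RSCK:
# [filter_height, filter_width, input_depth, output_depth]
w_rsck = np.random.rand(3, 3, 16, 32).astype(np.float32)

# TensorRT expects KCRS:
# [output_depth, input_depth, filter_height, filter_width]
w_kcrs = w_rsck.transpose(3, 2, 0, 1)
print(w_kcrs.shape)  # (32, 16, 3, 3)

# Activations: TensorFlow NHWC -> TensorRT NCHW
x_nhwc = np.random.rand(1, 224, 224, 3).astype(np.float32)
x_nchw = x_nhwc.transpose(0, 3, 1, 2)
print(x_nchw.shape)  # (1, 3, 224, 224)
```

You would apply the weight transpose once while serializing weights to your intermediate format, so the TensorRT side can consume them directly.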
See this paper for an extended discussion of tensor formats: https://arxiv.org/abs/1410.0759
This link also has useful relevant info: https://www.tensorflow.org/versions/master/extend/tool_developers/
Answer 2:
No workarounds are currently needed, as the new TensorRT 3 added support for TensorFlow.
Source: https://stackoverflow.com/questions/41142284/run-tensorflow-with-nvidia-tensorrt-inference-engine