Run Tensorflow with NVIDIA TensorRT Inference Engine

旧巷少年郎 2021-02-08 22:11

I would like to use NVIDIA TensorRT to run my TensorFlow models. Currently, TensorRT supports Caffe prototxt network descriptor files.

I was not able to find source code or documentation for doing this with TensorFlow models.

2 Answers
  • 2021-02-08 22:51

    TensorRT 3.0 supports import/conversion of TensorFlow graphs via its UFF (Universal Framework Format). Some layer implementations are missing and will require custom implementations via the IPlugin interface.

    Previous versions didn't support native import of TensorFlow models/checkpoints.

    What you can also do is export the layer/network description into your own intermediate format (such as a text file) and then use the TensorRT C++ API to construct the graph for inference. You'd have to export the convolution weights/biases separately. Pay attention to the layouts: for activations, TensorFlow uses NHWC while TensorRT uses NCHW; for convolution weights, TensorFlow uses RSCK ([filter_height, filter_width, input_depth, output_depth]) while TensorRT uses KCRS.

    See this paper for an extended discussion of tensor formats: https://arxiv.org/abs/1410.0759

    Also this link has useful relevant info: https://www.tensorflow.org/versions/master/extend/tool_developers/
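    The layout conversions described above can be sketched with NumPy; the shapes below are arbitrary examples (not taken from the question), and this only illustrates the axis reordering, not a full export pipeline:

    ```python
    import numpy as np

    # Example TF conv filter in RSCK layout:
    # [filter_height=2, filter_width=3, input_depth=4, output_depth=5]
    tf_weights = np.arange(2 * 3 * 4 * 5, dtype=np.float32).reshape(2, 3, 4, 5)

    # TensorRT expects KCRS: [output_depth, input_depth, filter_height, filter_width].
    # transpose(3, 2, 0, 1) maps (R, S, C, K) -> (K, C, R, S);
    # ascontiguousarray ensures the buffer is dense in the new order.
    trt_weights = np.ascontiguousarray(tf_weights.transpose(3, 2, 0, 1))
    print(trt_weights.shape)  # (5, 4, 2, 3)

    # Same idea for activations: NHWC -> NCHW.
    nhwc = np.zeros((1, 224, 224, 3), dtype=np.float32)
    nchw = np.ascontiguousarray(nhwc.transpose(0, 3, 1, 2))
    print(nchw.shape)  # (1, 3, 224, 224)
    ```

    The same element is then addressed as `trt_weights[k, c, r, s]` where TensorFlow would use `tf_weights[r, s, c, k]`.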

  • 2021-02-08 22:59

    No workarounds are currently needed, as the new TensorRT 3 added support for importing TensorFlow models.
