TensorRT FP16 gives slower inference performance than native Tensorflow Slim savedmodel

Backend | Unresolved | 0 replies | 1907 views
Asked by 感动是毒 on 2021-02-14 01:17

I used an Estimator on TensorFlow 1.14 to convert a TensorFlow Slim model (resnet_v2_50) to the SavedModel format, then used TensorRT to quantize the SavedModel to FP16. However, the FP16 model's inference is slower than the native SavedModel.
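For reference, a TF 1.14-era conversion along these lines would typically use the `TrtGraphConverter` API from `tensorflow.python.compiler.tensorrt`. The sketch below is an assumption about how the conversion was done (the original post does not include code); the directory names are placeholders. One common cause of FP16 being slower than native TensorFlow is graph fragmentation into many small TensorRT segments, which `minimum_segment_size` can help diagnose:

```python
# Hedged sketch: convert a TF 1.x SavedModel to a TF-TRT FP16 engine.
# Directory names are illustrative placeholders, not from the original post.
SAVED_MODEL_DIR = "resnet_v2_50_savedmodel"   # assumed Estimator export dir
TRT_OUTPUT_DIR = "resnet_v2_50_trt_fp16"      # assumed output dir


def convert_to_fp16(input_dir, output_dir):
    # Import inside the function so the sketch is inspectable without
    # TensorFlow installed; requires TF 1.14 built with TensorRT support.
    from tensorflow.python.compiler.tensorrt import trt_convert as trt

    converter = trt.TrtGraphConverter(
        input_saved_model_dir=input_dir,
        precision_mode="FP16",
        # Subgraphs with fewer ops than this stay in native TensorFlow.
        # Many tiny TRT segments (frequent TF<->TRT transitions) are a
        # common reason an "optimized" FP16 model runs slower.
        minimum_segment_size=3,
        is_dynamic_op=True,
    )
    converter.convert()
    converter.save(output_dir)


if __name__ == "__main__":
    convert_to_fp16(SAVED_MODEL_DIR, TRT_OUTPUT_DIR)
```

When debugging a slowdown like this, it can help to log how many `TRTEngineOp` nodes the converted graph contains: a single large engine is the goal, while dozens of small ones usually mean the conversion added overhead rather than removing it.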
