Problem converting a TensorFlow SavedModel from float32 to float16 using TensorRT (TF-TRT)
Question: I have a TensorFlow (version 1.14) float32 SavedModel that I want to convert to float16. According to https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html#usage-example , I can pass "FP16" as precision_mode to convert the model to FP16. But after inspecting the converted model in TensorBoard, it is still FP32: the network parameters are DT_FLOAT instead of DT_HALF. The size of the converted model is also similar to that of the model before conversion. (Here I assume that, if converted
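For reference, the conversion I am running follows the TF-TRT usage example linked above, roughly like this in TF 1.14 (the input and output directory names are placeholders for my actual paths):

```python
# Sketch of the TF-TRT conversion as described in the NVIDIA user guide,
# using the TF 1.x TrtGraphConverter API. Requires a TensorRT-enabled
# TensorFlow 1.14 build and a compatible GPU; directory names are placeholders.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverter(
    input_saved_model_dir="my_fp32_saved_model",  # placeholder path
    precision_mode="FP16",  # per the docs, should enable FP16 precision
)
converter.convert()
converter.save("my_fp16_saved_model")  # placeholder path
```

After running this, I load `my_fp16_saved_model` into TensorBoard and check the dtypes of the variables, which is where I see DT_FLOAT rather than DT_HALF.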