Error converting TF model for Jetson Nano using tf.trt


Question

I am trying to convert a TF 1.14.0 SavedModel to TensorRT on the Jetson Nano. I saved the model with tf.saved_model.save and am now trying to convert it on the Nano, but I get the following error:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/importer.py", line 427, in import_graph_def
    graph._c_graph, serialized, options)  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 1 of node StatefulPartitionedCall was passed float from acoustic_cnn/conv2d_seq_layer/conv3d/kernel:0 incompatible with expected resource.
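
For context, the model was exported roughly like this (a sketch only: the real acoustic_cnn model definition is not shown here, and the Conv2D stand-in below is hypothetical):

import tensorflow as tf

tf.enable_eager_execution()

# Hypothetical stand-in for the real acoustic_cnn model (which, per the
# error message, contains conv2d_seq_layer/conv3d).
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, padding='same', input_shape=(18, 63, 8)),
])

# Exporting this way produces a TF2-style SavedModel: variables are stored
# as resource tensors and the forward pass is wrapped in a
# StatefulPartitionedCall node, which is the node named in the traceback.
tf.saved_model.save(model, 'tst')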

I have seen this issue discussed on the web, but none of the proposed solutions works for me. I tried:

  1. Setting tf.keras.backend.set_learning_phase(0) (source).

  2. Using is_dynamic_op=True, precision_mode='FP32' (source), and I still get the error.

  3. Also, I am using TF eager execution, so I don't see how I would modify the graph_def as suggested here (the graph-mode version of that edit is sketched right after this list).
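
For reference, the graph_def edit those answers describe looks roughly like this in TF1 graph mode (a sketch under assumptions: 'output_node' is a hypothetical placeholder, since I don't know the real output op name, and this session-based step is exactly what eager mode rules out):

import tensorflow as tf

# Graph-mode approach from the linked answers: load the SavedModel into a
# session and fold the resource variables into constants before TF-TRT
# sees the graph.
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], 'tst')
    frozen_graph_def = tf.graph_util.convert_variables_to_constants(
        sess,
        sess.graph.as_graph_def(),
        output_node_names=['output_node'])  # hypothetical output op name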

What else do you think I should try?

For reference, below is the code I use for the conversion, and here is the link to my saved_model.

Conversion code

import numpy as np
import tensorflow as tf
from ipdb import set_trace
from tensorflow.python.compiler.tensorrt import trt_convert as trt

INPUT_SAVED_MODEL_DIR = 'tst'
OUTPUT_SAVED_MODEL_DIR = 'tst_out'

tf.enable_eager_execution()

def load_run_savedmodel():
    # Sanity check: load the unconverted SavedModel and run one dummy batch.
    mod = tf.saved_model.load_v2(INPUT_SAVED_MODEL_DIR)
    inp = tf.convert_to_tensor(np.ones((32, 18, 63, 8)), dtype=tf.float32)
    out = mod(inp)
    return out

def convert_savedmodel():

    tf.keras.backend.set_learning_phase(0)

    # Currently unused: is_dynamic_op and precision_mode are passed
    # directly to the converter below instead.
    params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
        # precision_mode='FP16',
        # is_dynamic_op=True
    )

    converter = trt.TrtGraphConverter(input_saved_model_dir=INPUT_SAVED_MODEL_DIR,
                                      is_dynamic_op=True,
                                      precision_mode='FP32')

    # The InvalidArgumentError from the traceback is raised here, inside convert().
    converter.convert()
    converter.save(OUTPUT_SAVED_MODEL_DIR)

    load_infer_savedmodel()

def load_infer_savedmodel():
    # NB: tf.Session conflicts with the eager execution enabled above.
    with tf.Session() as sess:
        # Load the converted SavedModel and read the input/output tensor
        # names from its serving signature.
        meta_graph = tf.saved_model.loader.load(
            sess, [tf.saved_model.tag_constants.SERVING], OUTPUT_SAVED_MODEL_DIR)
        sig = meta_graph.signature_def['serving_default']
        input_name = list(sig.inputs.values())[0].name
        output_name = list(sig.outputs.values())[0].name
        set_trace()
        input_data = np.ones((32, 18, 63, 8), dtype=np.float32)
        output = sess.run(output_name, feed_dict={input_name: input_data})
        return output


if __name__ == '__main__':
    convert_savedmodel()
    # load_infer_savedmodel()
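
For completeness, the closest I have gotten to a graph_def under eager execution is via the loaded model's serving signature (a sketch; whether editing this graph_def can fix the float/resource mismatch is exactly what I don't know):

import tensorflow as tf

tf.enable_eager_execution()

def inspect_graphdef_eager():
    # Eager-mode path: pull a GraphDef out of the SavedModel's serving
    # signature instead of going through a tf.Session.
    mod = tf.saved_model.load_v2('tst')
    concrete_fn = mod.signatures['serving_default']
    graph_def = concrete_fn.graph.as_graph_def()
    # The StatefulPartitionedCall node from the traceback shows up here.
    print([n.name for n in graph_def.node if 'PartitionedCall' in n.op])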

Source: https://stackoverflow.com/questions/58940893/error-converting-tf-model-for-jetson-nano-using-tf-trt
