C++ Tensorflow API with TensorRT


Another way to solve the "Not found: Op type not registered 'TRTEngineOp'" error on TensorFlow 1.8:

1) In the file tensorflow/contrib/tensorrt/BUILD, add a new section with the following content:

cc_library(
    name = "trt_engine_op_kernel_cc",
    srcs = [
        "kernels/trt_calib_op.cc",
        "kernels/trt_engine_op.cc",
        "ops/trt_calib_op.cc",
        "ops/trt_engine_op.cc",
        "shape_fn/trt_shfn.cc",
    ],
    hdrs = [
        "kernels/trt_calib_op.h",
        "kernels/trt_engine_op.h",
        "shape_fn/trt_shfn.h",
    ],
    copts = tf_copts(),
    visibility = ["//visibility:public"],
    deps = [
        ":trt_logging",
        ":trt_plugins",
        ":trt_resources",
        "//tensorflow/core:gpu_headers_lib",
        "//tensorflow/core:lib_proto_parsing",
        "//tensorflow/core:stream_executor_headers_lib",
    ] + if_tensorrt([
        "@local_config_tensorrt//:nv_infer",
    ]) + tf_custom_op_library_additional_deps(),
    alwayslink = 1,  # buildozer: disable=alwayslink-with-hdrs
)

2) Add //tensorflow/contrib/tensorrt:trt_engine_op_kernel_cc as a dependency to the Bazel target you want to build

PS: With this approach there is no need to load the library _trt_engine_op.so with TF_LoadLibrary
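
To verify that the kernels are actually linked in, a minimal loader along these lines should now succeed where it previously failed. This is my own sketch using the TensorFlow C++ API; "model.pb" stands in for a frozen graph that already contains TRTEngineOp nodes (e.g. produced by the Python converter):

#include <memory>
#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/lib/core/status.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/public/session.h"

int main() {
  tensorflow::GraphDef graph_def;
  // "model.pb" is a placeholder for a frozen graph that already
  // contains TRTEngineOp nodes.
  TF_CHECK_OK(tensorflow::ReadBinaryProto(tensorflow::Env::Default(),
                                          "model.pb", &graph_def));

  std::unique_ptr<tensorflow::Session> session(
      tensorflow::NewSession(tensorflow::SessionOptions()));
  // This is the call that fails with "Op type not registered
  // 'TRTEngineOp'" when the kernels are missing from the binary.
  TF_CHECK_OK(session->Create(graph_def));
  return 0;
}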

Here are my findings (and a workaround) for this problem (TensorFlow 1.8.0, TensorRT 3.0.4):

I wanted to include TensorRT support in a library that loads a graph from a given *.pb file.

Just adding //tensorflow/contrib/tensorrt:trt_engine_op_kernel to my Bazel BUILD file didn't do the trick for me. I still got a message indicating that the ops were not registered:

2018-05-21 12:22:07.286665: E tensorflow/core/framework/op_kernel.cc:1242] OpKernel ('op: "TRTCalibOp" device_type: "GPU"') for unknown op: TRTCalibOp
2018-05-21 12:22:07.286856: E tensorflow/core/framework/op_kernel.cc:1242] OpKernel ('op: "TRTEngineOp" device_type: "GPU"') for unknown op: TRTEngineOp
2018-05-21 12:22:07.296024: E tensorflow/examples/tf_inference_lib/cTfInference.cpp:56] Not found: Op type not registered 'TRTEngineOp' in binary running on ***. 
Make sure the Op and Kernel are registered in the binary running in this process.

The solution was that I had to load the ops library (a tf_custom_op_library) from my C++ code using the C API:

#include "tensorflow/c/c_api.h"
...
TF_Status status = TF_NewStatus();
TF_LoadLibrary("_trt_engine_op.so", status);

The shared object _trt_engine_op.so is built by the Bazel target //tensorflow/contrib/tensorrt:python/ops/_trt_engine_op.so:

bazel build --config=opt --config=cuda --config=monolithic \
     //tensorflow/contrib/tensorrt:python/ops/_trt_engine_op.so

Now I only have to make sure that _trt_engine_op.so is available whenever it is needed, e.g. via LD_LIBRARY_PATH.
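
An alternative to LD_LIBRARY_PATH is to pass an absolute path to TF_LoadLibrary, which accepts a full path to the shared object. For example, with the LoadTrtOps sketch from above (the path is hypothetical and depends on your install layout):

// Hypothetical deployment path; adjust to where you ship the .so.
LoadTrtOps("/opt/myapp/lib/_trt_engine_op.so");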

If anybody has an idea how to do this in a more elegant way (why do we have two artifacts that have to be built? Can't we just have one?), I'm happy about any suggestion.

tl;dr

  1. Add //tensorflow/contrib/tensorrt:trt_engine_op_kernel as a dependency to the Bazel target you want to build.

  2. Load the ops library _trt_engine_op.so in your code using the C API.

For TensorFlow r1.8, the additions shown below to two BUILD files, together with building libtensorflow_cc.so with the monolithic option, worked for me.

diff --git a/tensorflow/BUILD b/tensorflow/BUILD
index cfafffd..fb8eb31 100644
--- a/tensorflow/BUILD
+++ b/tensorflow/BUILD
@@ -525,6 +525,8 @@ tf_cc_shared_object(
         "//tensorflow/cc:scope",
         "//tensorflow/cc/profiler",
         "//tensorflow/core:tensorflow",
+        "//tensorflow/contrib/tensorrt:trt_conversion",
+        "//tensorflow/contrib/tensorrt:trt_engine_op_kernel",
     ],
 )

diff --git a/tensorflow/contrib/tensorrt/BUILD b/tensorflow/contrib/tensorrt/BUILD
index fd3582e..a6566b9 100644
--- a/tensorflow/contrib/tensorrt/BUILD
+++ b/tensorflow/contrib/tensorrt/BUILD
@@ -76,6 +76,8 @@ cc_library(
     srcs = [
         "kernels/trt_calib_op.cc",
         "kernels/trt_engine_op.cc",
+        "ops/trt_calib_op.cc",
+        "ops/trt_engine_op.cc",
     ],
     hdrs = [
         "kernels/trt_calib_op.h",
@@ -86,6 +88,7 @@ cc_library(
     deps = [
         ":trt_logging",
         ":trt_resources",
+        ":trt_shape_function",
         "//tensorflow/core:gpu_headers_lib",
         "//tensorflow/core:lib_proto_parsing",
         "//tensorflow/core:stream_executor_headers_lib",

As you mentioned, it should work when you add //tensorflow/contrib/tensorrt:trt_engine_op_kernel to the dependency list. Currently the TensorFlow-TensorRT integration is still in progress and may work well only for the Python API; for C++, you'll need to call ConvertGraphDefToTensorRT() from tensorflow/contrib/tensorrt/convert/convert_graph.h to perform the conversion.
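
A rough sketch of what that call might look like, based on my reading of the TF 1.8 convert_graph.h; treat the exact parameter list as an assumption rather than a stable API (the contrib interface changed between releases), and "softmax" is only a placeholder output name:

#include <string>
#include <vector>
#include "tensorflow/contrib/tensorrt/convert/convert_graph.h"
#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/lib/core/status.h"

// Rewrites TensorRT-compatible subgraphs of a frozen graph into
// TRTEngineOp nodes.
tensorflow::Status ConvertForInference(const tensorflow::GraphDef& graph_def,
                                       tensorflow::GraphDef* trt_graph_def) {
  std::vector<std::string> output_names = {"softmax"};  // placeholder
  return tensorflow::tensorrt::convert::ConvertGraphDefToTensorRT(
      graph_def, output_names,
      /*max_batch_size=*/8,
      /*max_workspace_size_bytes=*/1ULL << 30,
      trt_graph_def);
}

The resulting trt_graph_def is what you would then hand to Session::Create() instead of the original graph.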

Let me know if you have any questions.

Solution: add the following import, which registers the TensorRT ops as a side effect:

from tensorflow.python.compiler.tensorrt import trt_convert as trt

Related discussion: https://github.com/tensorflow/tensorflow/issues/26525
