nvidia-jetson

Load and test a .trt model

一世执手 submitted on 2020-12-06 18:54:08
Question: I need to run my model on an NVIDIA Jetson TX2, so I converted my working YOLOv3 model to TensorRT (.trt format), following this guide: https://towardsdatascience.com/have-you-optimized-your-deep-learning-model-before-deployment-cdc3aa7f413d . After converting the model to .trt I need to test whether it still works correctly, i.e. whether the detections are good enough, but I couldn't find any sample code for loading and testing a .trt model. If anybody can help me,
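A minimal sketch of deserializing a serialized engine and running one inference with the TensorRT Python API. It assumes the TensorRT Python bindings and pycuda are installed on the Jetson; the filename `yolov3.trt` is a placeholder for your engine, and the dummy random input only checks that the engine executes end to end (real accuracy testing needs your preprocessed images and YOLO output decoding):

```python
import numpy as np
import pycuda.autoinit          # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize the engine from disk
with open("yolov3.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate one host/device buffer pair per binding (inputs and outputs)
buffers, bindings = [], []
for binding in engine:
    size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
    host_mem = cuda.pagelocked_empty(size, np.float32)
    dev_mem = cuda.mem_alloc(host_mem.nbytes)
    buffers.append((host_mem, dev_mem))
    bindings.append(int(dev_mem))

# Feed a dummy image just to verify the engine runs
np.copyto(buffers[0][0], np.random.rand(buffers[0][0].size).astype(np.float32))
cuda.memcpy_htod(buffers[0][1], buffers[0][0])
context.execute_v2(bindings)

# Copy outputs back; these are raw YOLO tensors that still need decoding/NMS
for host_mem, dev_mem in buffers[1:]:
    cuda.memcpy_dtoh(host_mem, dev_mem)
    print(host_mem.shape)
```

To judge detection quality, replace the random input with letterboxed, normalized images and compare the decoded boxes against the original Darknet/Keras model's output on the same images.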

Error converting TF model for Jetson Nano using tf.trt

流过昼夜 submitted on 2020-01-06 05:34:08
Question: I am trying to convert a TF 1.14.0 saved_model to TensorRT on the Jetson Nano. I saved my model via tf.saved_model.save and am trying to convert it on the Nano, but I get the following error:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/importer.py", line 427, in import_graph_def
    graph._c_graph, serialized, options)  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input 1
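For reference, the TF 1.14 conversion path uses `TrtGraphConverter`. A sketch of a typical invocation, assuming a Jetson build of TF 1.14 with TF-TRT support; `saved_model_dir` and `trt_saved_model_dir` are placeholder paths (an InvalidArgumentError during `import_graph_def` often points at a graph/input mismatch rather than the converter call itself, so this only shows the expected usage):

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverter(
    input_saved_model_dir="saved_model_dir",
    precision_mode="FP16",   # FP16 suits the Nano's GPU
    is_dynamic_op=True)      # build TRT engines at runtime, for dynamic shapes
converter.convert()
converter.save("trt_saved_model_dir")
```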

Keras ValueError: Unknown layer:name, when trying to load a model on another platform

此生再无相见时 submitted on 2019-12-13 01:16:55
Question: I have trained a convolutional neural network using Keras 2.2.4 on an Nvidia Quadro board. I saved the trained model in two separate files: one file (model.json) that describes the architecture and another (model.h5) that holds all the weights. I want to load the saved model on an Nvidia Jetson TX2 board that runs Keras 2.2.2, and I'm trying to do it as follows:

# load json and create model
json_file = open(prefix+'final_model.json', 'r')
loaded_model_json = json_file.read()
json_file
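Since the error names a layer the older Keras cannot resolve, one quick debugging step is to list the layer class names recorded in the architecture JSON with the standard library alone, then map any unresolved name via the real Keras `custom_objects` argument of `model_from_json`. A sketch, where `layer_class_names` and the tiny `sample` architecture string are illustrative stand-ins, not part of the question's saved model:

```python
import json

def layer_class_names(model_json):
    """Return the layer class names stored in a Keras architecture JSON."""
    config = json.loads(model_json)["config"]
    # Functional models keep layers under config["layers"];
    # Sequential models of that era store the layer list directly.
    layers = config["layers"] if isinstance(config, dict) else config
    return [layer["class_name"] for layer in layers]

# Tiny stand-in architecture string for illustration (not a real saved model)
sample = json.dumps({
    "class_name": "Model",
    "config": {"layers": [
        {"class_name": "InputLayer", "config": {}},
        {"class_name": "Conv2D", "config": {}},
    ]},
})
print(layer_class_names(sample))  # ['InputLayer', 'Conv2D']
```

Once the offending class name is known, loading on the TX2 becomes `model_from_json(loaded_model_json, custom_objects={"ThatLayer": ThatLayer})`; if the name is a stock layer that simply didn't exist in Keras 2.2.2, upgrading Keras on the TX2 to match the training version (2.2.4) is the cleaner fix.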

No ethernet access on jetson nano with custom yocto image

回眸只為那壹抹淺笑 submitted on 2019-12-11 17:08:36
Question: I've created a very minimal image for the Jetson Nano with the recipe:

inherit core-image
inherit distro_features_check

REQUIRED_DISTRO_FEATURES = "x11"
IMAGE_FEATURES += "package-management splash"

CORE_OS = "packagegroup-core-boot \
    packagegroup-core-x11 \
    packagegroup-xfce-base \
    kernel-modules \
"
WIFI_SUPPORT = " \
    ifupdown \
    dropbear \
    crda \
    iw \
"
DEV_SDK_INSTALL = " \
    opencv \
    opencv-samples \
    gstreamer1.0-omx-tegra \
    python-numpy \
    binutils \
    binutils-symlinks \
    coreutils \
    cpp \
    cpp
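Since the recipe installs ifupdown but nothing configures the wired interface, one common cause of "no ethernet" on a minimal image is simply a missing interface stanza. A hedged sketch of an /etc/network/interfaces fragment to ship in the image (assuming the Nano's wired interface enumerates as eth0 and a DHCP server is present; the tegra ethernet kernel module must also be in the image, which kernel-modules should cover):

```
# /etc/network/interfaces — bring up the wired interface via DHCP at boot
auto eth0
iface eth0 inet dhcp
```

If the interface does not appear at all in `ip link`, the problem is the kernel driver rather than configuration, and `dmesg | grep -i eth` on the Nano is the next place to look.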

u-boot script to allow choosing between which rootfs part to boot (RAUC)

青春壹個敷衍的年華 submitted on 2019-12-11 14:30:16
Question: I've managed to create an image with two rootfs partitions to run on my Jetson Nano with Yocto/Poky. I followed the meta-rauc layer README and the RAUC user manual to create the system.conf file and the rauc_%.bbappend file, and I am able to create bundles successfully. As I understand it, I need some sort of u-boot script:

In order to enable RAUC to switch the correct slot, its system configuration must specify the name of the respective slot from the bootloader's perspective. You also have to set
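The RAUC manual's u-boot integration follows a pattern like the sketch below: the script walks `BOOT_ORDER`, decrements the per-slot attempt counters, and picks the first slot with attempts left. The slot names A/B and the partitions mmcblk0p1/mmcblk0p2 are assumptions for this two-rootfs Nano layout and must match the slot names declared in system.conf:

```
# Hedged sketch of A/B slot selection for a u-boot boot script (boot.scr)
test -n "${BOOT_ORDER}"  || setenv BOOT_ORDER "A B"
test -n "${BOOT_A_LEFT}" || setenv BOOT_A_LEFT 3
test -n "${BOOT_B_LEFT}" || setenv BOOT_B_LEFT 3

setenv rootpart
for SLOT in ${BOOT_ORDER}; do
  if test -z "${rootpart}"; then
    if test "${SLOT}" = "A" && test ${BOOT_A_LEFT} -gt 0; then
      setexpr BOOT_A_LEFT ${BOOT_A_LEFT} - 1
      setenv rootpart /dev/mmcblk0p1
      setenv raucslot A
    fi
    if test "${SLOT}" = "B" && test ${BOOT_B_LEFT} -gt 0; then
      setexpr BOOT_B_LEFT ${BOOT_B_LEFT} - 1
      setenv rootpart /dev/mmcblk0p2
      setenv raucslot B
    fi
  fi
done

# Persist the decremented counters, then boot the chosen slot
saveenv
setenv bootargs "${bootargs} root=${rootpart} rauc.slot=${raucslot}"
```

On a successful boot, the RAUC service (with `rauc status mark-good` or the u-boot backend configured in system.conf) resets the attempt counter, so a slot that repeatedly fails to boot eventually falls back to the other one.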