tensorrt
Segmentation fault on graph optimization
Hello all,
I'm having some trouble optimizing an object detector (SSD) with a custom bounding box decoder. It works well when running it with TF2, but I get a SEGFAULT when trying to pass it through TF-TRT. Because the code and the saved_model are proprietary, they can't be shared here. However, the code is similar to this example: https://docs.nvidia.com/deeplearning/frameworks/tf-trt-user-guide/index.html I hope someone will be able to interpret the attached log and find what's wrong. Thanks.
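For context, the conversion path in that guide is the standard TF2 TF-TRT workflow; a minimal sketch of it is below. The SavedModel paths and the FP16 precision mode are placeholders, not the actual proprietary setup.

```python
# Minimal TF2 TF-TRT conversion sketch; paths and precision mode are placeholders.
from tensorflow.python.compiler.tensorrt import trt_convert as trt

params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16)

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="/models/ssd_saved_model",  # placeholder path
    conversion_params=params)
converter.convert()                             # graph optimization step where the SEGFAULT occurs
converter.save("/models/ssd_saved_model_trt")   # placeholder output path
```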
Environment: container nvcr.io/nvidia/tensorflow:20.03-tf2-py3, NVIDIA GV100 (32 GB)
Output log: log_detector_trt.txt
Hi, it looks like the same issue as #181. That one is fixed in TF master; please give it a try.
OK, I will check that with the latest TF release! Thanks a lot.
Hi, did you have any luck? This is exactly the same issue I'm having. If it worked, can you provide the environment details?
Thanks, much appreciated.
[EDIT] When will nvcr.io/nvidia/tensorflow:20.05-tf2-py3 be available for us to test with this latest change?
@tfeher I also have another question from your Stack Overflow response: why does TF-TRT create individual engines for each operation? Conversion from UFF/ONNX creates one engine file for the entire network. Can we load multiple serialized engine files the same way using the C++ TensorRT API?
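To be concrete, by "loading multiple serialized engine files" I mean a pattern like the sketch below (written with the TensorRT Python bindings; the engine file names are hypothetical). Would the C++ API, i.e. createInferRuntime plus IRuntime::deserializeCudaEngine, support the same flow?

```python
# Sketch of deserializing several standalone engine files with one runtime;
# file names are hypothetical and error handling is omitted.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(TRT_LOGGER)

engines = []
for plan_path in ["backbone.plan", "bbox_decoder.plan"]:  # hypothetical engine files
    with open(plan_path, "rb") as f:
        engines.append(runtime.deserialize_cuda_engine(f.read()))

# One execution context per engine; the caller has to pass intermediate
# tensors from one engine to the next.
contexts = [engine.create_execution_context() for engine in engines]
```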