
Segmentation fault in TensorRT 10.3 (and older versions) with GCC 13


Description

TensorRT segfaults when parsing an ONNX model (YOLOv8 QAT) with GCC 13 installed on Ubuntu 22.04.

Environment

TensorRT Version: 10.3 and older

NVIDIA GPU: NVIDIA RTX A6000

NVIDIA Driver Version: 560.28.03

CUDA Version: 12.6

CUDNN Version: 8.9.6.50-1+cuda12.2

Operating System:

Container: ubuntu-22.04.Dockerfile + GCC 13 installed

# Install GCC 13
ARG GCC_VERSION=13
RUN add-apt-repository ppa:ubuntu-toolchain-r/test
RUN apt update && apt install g++-"$GCC_VERSION" gcc-"$GCC_VERSION" -y && apt clean
RUN update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-"$GCC_VERSION" "$GCC_VERSION" \
    --slave /usr/bin/g++ g++ /usr/bin/g++-"$GCC_VERSION" \
    --slave /usr/bin/gcov gcov /usr/bin/gcov-"$GCC_VERSION"
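
To rule out a half-switched toolchain, it may be worth confirming inside the container that gcc and g++ both resolve to version 13 (a quick sanity check; the exact output depends on the PPA build):

# Sanity check: both compilers should report 13.x and the alternatives should point at them
RUN gcc --version && g++ --version && update-alternatives --display gcc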

The same issue occurs with nvcr.io/nvidia/tensorrt:24.08-py3 with GCC 13 installed on top of it.

Relevant Files

Steps To Reproduce

Commands or scripts:

  • Follow the TRT instructions to build and run the container, install GCC 13, and build TRT from source
  • Convert the ONNX model with ./trtexec --onnx=qat_model_yolov8.onnx --best; the gdb session below captures the crash
(gdb) run --onnx=../../data/yolov8_qat.onnx --best
Starting program: /workspace/TensorRT/build/out/trtexec_debug --onnx=../../data/yolov8_qat.onnx --best
warning: Error disabling address space randomization: Operation not permitted
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/usr/lib/x86_64-linux-gnu/libthread_db.so.1".
&&&& RUNNING TensorRT.trtexec [TensorRT v100300] # /workspace/TensorRT/build/out/trtexec_debug --onnx=../../data/yolov8_qat.onnx --best
[New Thread 0x75caf7600000 (LWP 12728)]
[09/30/2024-15:15:57] [I] === Model Options ===
[09/30/2024-15:15:57] [I] Format: ONNX
[09/30/2024-15:15:57] [I] Model: ../../data/yolov8_qat.onnx
[09/30/2024-15:15:57] [I] Output:
[09/30/2024-15:15:57] [I] === Build Options ===
[09/30/2024-15:15:57] [I] Memory Pools: workspace: default, dlaSRAM: default, dlaLocalDRAM: default, dlaGlobalDRAM: default, tacticSharedMem: default
[09/30/2024-15:15:57] [I] avgTiming: 8
[09/30/2024-15:15:57] [I] Precision: FP32+FP16+BF16+INT8
[09/30/2024-15:15:57] [I] LayerPrecisions: 
[09/30/2024-15:15:57] [I] Layer Device Types: 
[09/30/2024-15:15:57] [I] Calibration: Dynamic
[09/30/2024-15:15:57] [I] Refit: Disabled
[09/30/2024-15:15:57] [I] Strip weights: Disabled
[09/30/2024-15:15:57] [I] Version Compatible: Disabled
[09/30/2024-15:15:57] [I] ONNX Plugin InstanceNorm: Disabled
[09/30/2024-15:15:57] [I] TensorRT runtime: full
[09/30/2024-15:15:57] [I] Lean DLL Path: 
[09/30/2024-15:15:57] [I] Tempfile Controls: { in_memory: allow, temporary: allow }
[09/30/2024-15:15:57] [I] Exclude Lean Runtime: Disabled
[09/30/2024-15:15:57] [I] Sparsity: Disabled
[09/30/2024-15:15:57] [I] Safe mode: Disabled
[09/30/2024-15:15:57] [I] Build DLA standalone loadable: Disabled
[09/30/2024-15:15:57] [I] Allow GPU fallback for DLA: Disabled
[09/30/2024-15:15:57] [I] DirectIO mode: Disabled
[09/30/2024-15:15:57] [I] Restricted mode: Disabled
[09/30/2024-15:15:57] [I] Skip inference: Disabled
[09/30/2024-15:15:57] [I] Save engine: 
[09/30/2024-15:15:57] [I] Load engine: 
[09/30/2024-15:15:57] [I] Profiling verbosity: 0
[09/30/2024-15:15:57] [I] Tactic sources: Using default tactic sources
[09/30/2024-15:15:57] [I] timingCacheMode: local
[09/30/2024-15:15:57] [I] timingCacheFile: 
[09/30/2024-15:15:57] [I] Enable Compilation Cache: Enabled
[09/30/2024-15:15:57] [I] errorOnTimingCacheMiss: Disabled
[09/30/2024-15:15:57] [I] Preview Features: Use default preview flags.
[09/30/2024-15:15:57] [I] MaxAuxStreams: -1
[09/30/2024-15:15:57] [I] BuilderOptimizationLevel: -1
[09/30/2024-15:15:57] [I] Calibration Profile Index: 0
[09/30/2024-15:15:57] [I] Weight Streaming: Disabled
[09/30/2024-15:15:57] [I] Runtime Platform: Same As Build
[09/30/2024-15:15:57] [I] Debug Tensors: 
[09/30/2024-15:15:57] [I] Input(s)s format: fp32:CHW
[09/30/2024-15:15:57] [I] Output(s)s format: fp32:CHW
[09/30/2024-15:15:57] [I] Input build shapes: model
[09/30/2024-15:15:57] [I] Input calibration shapes: model
[09/30/2024-15:15:57] [I] === System Options ===
[09/30/2024-15:15:57] [I] Device: 0
[09/30/2024-15:15:57] [I] DLACore: 
[09/30/2024-15:15:57] [I] Plugins:
[09/30/2024-15:15:57] [I] setPluginsToSerialize:
[09/30/2024-15:15:57] [I] dynamicPlugins:
[09/30/2024-15:15:57] [I] ignoreParsedPluginLibs: 0
[09/30/2024-15:15:57] [I] 
[09/30/2024-15:15:57] [I] === Inference Options ===
[09/30/2024-15:15:57] [I] Batch: Explicit
[09/30/2024-15:15:57] [I] Input inference shapes: model
[09/30/2024-15:15:57] [I] Iterations: 10
[09/30/2024-15:15:57] [I] Duration: 3s (+ 200ms warm up)
[09/30/2024-15:15:57] [I] Sleep time: 0ms
[09/30/2024-15:15:57] [I] Idle time: 0ms
[09/30/2024-15:15:57] [I] Inference Streams: 1
[09/30/2024-15:15:57] [I] ExposeDMA: Disabled
[09/30/2024-15:15:57] [I] Data transfers: Enabled
[09/30/2024-15:15:57] [I] Spin-wait: Disabled
[09/30/2024-15:15:57] [I] Multithreading: Disabled
[09/30/2024-15:15:57] [I] CUDA Graph: Disabled
[09/30/2024-15:15:57] [I] Separate profiling: Disabled
[09/30/2024-15:15:57] [I] Time Deserialize: Disabled
[09/30/2024-15:15:57] [I] Time Refit: Disabled
[09/30/2024-15:15:57] [I] NVTX verbosity: 0
[09/30/2024-15:15:57] [I] Persistent Cache Ratio: 0
[09/30/2024-15:15:57] [I] Optimization Profile Index: 0
[09/30/2024-15:15:57] [I] Weight Streaming Budget: 100.000000%
[09/30/2024-15:15:57] [I] Inputs:
[09/30/2024-15:15:57] [I] Debug Tensor Save Destinations:
[09/30/2024-15:15:57] [I] === Reporting Options ===
[09/30/2024-15:15:57] [I] Verbose: Disabled
[09/30/2024-15:15:57] [I] Averages: 10 inferences
[09/30/2024-15:15:57] [I] Percentiles: 90,95,99
[09/30/2024-15:15:57] [I] Dump refittable layers:Disabled
[09/30/2024-15:15:57] [I] Dump output: Disabled
[09/30/2024-15:15:57] [I] Profile: Disabled
[09/30/2024-15:15:57] [I] Export timing to JSON file: 
[09/30/2024-15:15:57] [I] Export output to JSON file: 
[09/30/2024-15:15:57] [I] Export profile to JSON file: 
[09/30/2024-15:15:57] [I] 
[09/30/2024-15:15:57] [I] === Device Information ===
[09/30/2024-15:15:57] [I] Available Devices: 
[09/30/2024-15:15:57] [I]   Device 0: "NVIDIA RTX A6000" UUID: GPU-f046bca2-ca31-632c-bd28-bfde07884c2d
[New Thread 0x75caf5a00000 (LWP 12729)]
[New Thread 0x75caf5000000 (LWP 12730)]
[09/30/2024-15:15:57] [I] Selected Device: NVIDIA RTX A6000
[09/30/2024-15:15:57] [I] Selected Device ID: 0
[09/30/2024-15:15:57] [I] Selected Device UUID: GPU-f046bca2-ca31-632c-bd28-bfde07884c2d
[09/30/2024-15:15:57] [I] Compute Capability: 8.6
[09/30/2024-15:15:57] [I] SMs: 84
[09/30/2024-15:15:57] [I] Device Global Memory: 48567 MiB
[09/30/2024-15:15:57] [I] Shared Memory per SM: 100 KiB
[09/30/2024-15:15:57] [I] Memory Bus Width: 384 bits (ECC disabled)
[09/30/2024-15:15:57] [I] Application Compute Clock Rate: 1.8 GHz
[09/30/2024-15:15:57] [I] Application Memory Clock Rate: 8.001 GHz
[09/30/2024-15:15:57] [I] 
[09/30/2024-15:15:57] [I] Note: The application clock rates do not reflect the actual clock rates that the GPU is currently running at.
[09/30/2024-15:15:57] [I] 
[09/30/2024-15:15:57] [I] TensorRT version: 10.3.0
[09/30/2024-15:15:57] [I] Loading standard plugins
[09/30/2024-15:15:57] [I] [TRT] [MemUsageChange] Init CUDA: CPU +2, GPU +0, now: CPU 18, GPU 2913 (MiB)
[09/30/2024-15:15:59] [I] [TRT] [MemUsageChange] Init builder kernel library: CPU +2088, GPU +386, now: CPU 2261, GPU 3299 (MiB)
[09/30/2024-15:15:59] [I] Start parsing network model.
[09/30/2024-15:15:59] [I] [TRT] ----------------------------------------------------------------
[09/30/2024-15:15:59] [I] [TRT] Input filename:   ../../data/yolov8_qat.onnx
[09/30/2024-15:15:59] [I] [TRT] ONNX IR version:  0.0.8
[09/30/2024-15:15:59] [I] [TRT] Opset version:    17
[09/30/2024-15:15:59] [I] [TRT] Producer name:    pytorch
[09/30/2024-15:15:59] [I] [TRT] Producer version: 2.1.1
[09/30/2024-15:15:59] [I] [TRT] Domain:           
[09/30/2024-15:15:59] [I] [TRT] Model version:    0
[09/30/2024-15:15:59] [I] [TRT] Doc string:       
[09/30/2024-15:15:59] [I] [TRT] ----------------------------------------------------------------
[09/30/2024-15:16:00] [I] Finished parsing network model. Parse time: 0.66842
[09/30/2024-15:16:00] [W] [TRT] Calibrator won't be used in explicit quantization mode. Please insert Quantize/Dequantize layers to indicate which tensors to quantize/dequantize.
[09/30/2024-15:16:00] [W] [TRT] /Reshape_19: IShuffleLayer with zeroIsPlaceHolder=true has reshape dimension at position 0 that might or might not be zero. TensorRT resolves it at runtime, but this may cause excessive memory consumption and is usually a sign of a bug in the network.
[09/30/2024-15:16:00] [W] [TRT] /Reshape_22: IShuffleLayer with zeroIsPlaceHolder=true has reshape dimension at position 0 that might or might not be zero. TensorRT resolves it at runtime, but this may cause excessive memory consumption and is usually a sign of a bug in the network.
[09/30/2024-15:16:00] [I] [TRT] Local timing cache in use. Profiling results in this builder pass will not be stored.

Thread 1 "trtexec_debug" received signal SIGSEGV, Segmentation fault.
0x000075caf9d0116e in ?? () from /usr/lib/x86_64-linux-gnu/libgcc_s.so.1
(gdb) bt
#0  0x000075caf9d0116e in ?? () from /usr/lib/x86_64-linux-gnu/libgcc_s.so.1
#1  0x000075caf9d01d5a in _Unwind_Find_FDE () from /usr/lib/x86_64-linux-gnu/libgcc_s.so.1
#2  0x000075caf9cfd60a in ?? () from /usr/lib/x86_64-linux-gnu/libgcc_s.so.1
#3  0x000075caf9cff07d in _Unwind_RaiseException () from /usr/lib/x86_64-linux-gnu/libgcc_s.so.1
#4  0x000075caf9ea705b in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6
#5  0x000075cab94349e6 in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.10
#6  0x000075cab9f8cfcf in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.10
#7  0x000075cab9ac5937 in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.10
#8  0x000075cab9aa5be4 in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.10
#9  0x000075cab9aad8ac in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.10
#10 0x000075cab9aafab5 in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.10
#11 0x000075cab99c5c8c in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.10
#12 0x000075cab99cb06a in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.10
#13 0x000075cab99cbab5 in ?? () from /usr/lib/x86_64-linux-gnu/libnvinfer.so.10
#14 0x00005ec2858921f9 in nvinfer1::IBuilder::buildSerializedNetwork (this=0x5ec291b9f3f0, network=..., config=...) at /workspace/TensorRT/include/NvInfer.h:9812
#15 0x00005ec28588bb84 in sample::networkToSerializedEngine (build=..., sys=..., builder=..., env=..., err=...) at /workspace/TensorRT/samples/common/sampleEngines.cpp:1209
#16 0x00005ec28588c545 in sample::modelToBuildEnv (model=..., build=..., sys=..., env=..., err=...) at /workspace/TensorRT/samples/common/sampleEngines.cpp:1293
#17 0x00005ec28588dd37 in sample::getEngineBuildEnv (model=..., build=..., sys=..., env=..., err=...) at /workspace/TensorRT/samples/common/sampleEngines.cpp:1477
#18 0x00005ec28594bcec in main (argc=3, argv=0x7fff310075f8) at /workspace/TensorRT/samples/trtexec/trtexec.cpp:327
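
The crash lands inside libgcc_s.so.1 while __cxa_throw unwinds an exception raised from libnvinfer. As a quick check of which runtime copies the process actually mapped, the loaded unwind/C++ libraries can be listed from the same gdb session (the paths reported are whatever my container resolves; they may differ elsewhere):

(gdb) info sharedlibrary libgcc
(gdb) info sharedlibrary libstdc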

The issue seems to come from libnvinfer.so.10 and GCC 13. The TRT open-source build uses a prebuilt libnvinfer (from https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/10.3.0/tars/TensorRT-10.3.0.26.Linux.x86_64-gnu.cuda-12.5.tar.gz), possibly compiled with an older GCC (GCC 8, judging by this table). The conversion works on an Orin with JetPack 6 (probably because TRT there is built with a newer GCC version).
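
One way to see which compiler a prebuilt binary was produced with is to dump its .comment section, assuming it was not stripped (the path below is where the tarball's library lands in my container):

# Print the compiler version strings embedded in the shared object, if present
readelf -p .comment /usr/lib/x86_64-linux-gnu/libnvinfer.so.10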

How can I make TRT (and libnvinfer) compatible with GCC 13? Also, is there a specific reason why it is built only with an older version of GCC?

Many thanks!

Have you tried the latest release?: Yes, same issue

Can this model run on other frameworks? For example run ONNX model with ONNXRuntime (polygraphy run <model.onnx> --onnxrt): Yes

jokla · Sep 30 '24, 18:09