TensorRT
"Cuda failure: named symbol not found" with TensorRT 10.0 when running trtexec on an RTX 4060 Ti
Description
I built a Docker image with the latest version of TensorRT. When I run trtexec inside the container to convert an ONNX model into a TensorRT engine, a CUDA error occurs ("Cuda failure: named symbol not found"). How can I solve this? The full trtexec output is below.
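For reference, a minimal sketch of the reproduction steps, assuming the NGC TensorRT image and mount path (the image tag and mount are assumptions; only the trtexec command itself is taken verbatim from the log):

    # Assumed image tag; substitute the image actually used
    docker pull nvcr.io/nvidia/tensorrt:24.05-py3

    # Launch the container with GPU access and the model directory mounted
    docker run --gpus all -it --rm \
        -v "$(pwd)/weights:/workspace/weights" \
        nvcr.io/nvidia/tensorrt:24.05-py3

    # Inside the container: convert the ONNX model to a TensorRT engine with FP16 enabled
    trtexec --onnx=./weights/StandWater_Seg_efsam_240508.onnx \
            --saveEngine=./weights/StandWater_Seg_efsam_240508.trt \
            --fp16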
&&&& RUNNING TensorRT.trtexec [TensorRT v100001] # trtexec --onnx=./weights/StandWater_Seg_efsam_240508.onnx --saveEngine=./weights/StandWater_Seg_efsam_240508.trt --fp16
[05/27/2024-07:46:47] [I] === Model Options ===
[05/27/2024-07:46:47] [I] Format: ONNX
[05/27/2024-07:46:47] [I] Model: ./weights/StandWater_Seg_efsam_240508.onnx
[05/27/2024-07:46:47] [I] Output:
[05/27/2024-07:46:47] [I] === Build Options ===
[05/27/2024-07:46:47] [I] Memory Pools: workspace: default, dlaSRAM: default, dlaLocalDRAM: default, dlaGlobalDRAM: default, tacticSharedMem: default
[05/27/2024-07:46:47] [I] avgTiming: 8
[05/27/2024-07:46:47] [I] Precision: FP32+FP16
[05/27/2024-07:46:47] [I] LayerPrecisions:
[05/27/2024-07:46:47] [I] Layer Device Types:
[05/27/2024-07:46:47] [I] Calibration:
[05/27/2024-07:46:47] [I] Refit: Disabled
[05/27/2024-07:46:47] [I] Strip weights: Disabled
[05/27/2024-07:46:47] [I] Version Compatible: Disabled
[05/27/2024-07:46:47] [I] ONNX Plugin InstanceNorm: Disabled
[05/27/2024-07:46:47] [I] TensorRT runtime: full
[05/27/2024-07:46:47] [I] Lean DLL Path:
[05/27/2024-07:46:47] [I] Tempfile Controls: { in_memory: allow, temporary: allow }
[05/27/2024-07:46:47] [I] Exclude Lean Runtime: Disabled
[05/27/2024-07:46:47] [I] Sparsity: Disabled
[05/27/2024-07:46:47] [I] Safe mode: Disabled
[05/27/2024-07:46:47] [I] Build DLA standalone loadable: Disabled
[05/27/2024-07:46:47] [I] Allow GPU fallback for DLA: Disabled
[05/27/2024-07:46:47] [I] DirectIO mode: Disabled
[05/27/2024-07:46:47] [I] Restricted mode: Disabled
[05/27/2024-07:46:47] [I] Skip inference: Disabled
[05/27/2024-07:46:47] [I] Save engine: ./weights/StandWater_Seg_efsam_240508.trt
[05/27/2024-07:46:47] [I] Load engine:
[05/27/2024-07:46:47] [I] Profiling verbosity: 0
[05/27/2024-07:46:47] [I] Tactic sources: Using default tactic sources
[05/27/2024-07:46:47] [I] timingCacheMode: local
[05/27/2024-07:46:47] [I] timingCacheFile:
[05/27/2024-07:46:47] [I] Enable Compilation Cache: Enabled
[05/27/2024-07:46:47] [I] errorOnTimingCacheMiss: Disabled
[05/27/2024-07:46:47] [I] Preview Features: Use default preview flags.
[05/27/2024-07:46:47] [I] MaxAuxStreams: -1
[05/27/2024-07:46:47] [I] BuilderOptimizationLevel: -1
[05/27/2024-07:46:47] [I] Calibration Profile Index: 0
[05/27/2024-07:46:47] [I] Weight Streaming: Disabled
[05/27/2024-07:46:47] [I] Debug Tensors:
[05/27/2024-07:46:47] [I] Input(s)s format: fp32:CHW
[05/27/2024-07:46:47] [I] Output(s)s format: fp32:CHW
[05/27/2024-07:46:47] [I] Input build shapes: model
[05/27/2024-07:46:47] [I] Input calibration shapes: model
[05/27/2024-07:46:47] [I] === System Options ===
[05/27/2024-07:46:47] [I] Device: 0
[05/27/2024-07:46:47] [I] DLACore:
[05/27/2024-07:46:47] [I] Plugins:
[05/27/2024-07:46:47] [I] setPluginsToSerialize:
[05/27/2024-07:46:47] [I] dynamicPlugins:
[05/27/2024-07:46:47] [I] ignoreParsedPluginLibs: 0
[05/27/2024-07:46:47] [I]
[05/27/2024-07:46:47] [I] === Inference Options ===
[05/27/2024-07:46:47] [I] Batch: Explicit
[05/27/2024-07:46:47] [I] Input inference shapes: model
[05/27/2024-07:46:47] [I] Iterations: 10
[05/27/2024-07:46:47] [I] Duration: 3s (+ 200ms warm up)
[05/27/2024-07:46:47] [I] Sleep time: 0ms
[05/27/2024-07:46:47] [I] Idle time: 0ms
[05/27/2024-07:46:47] [I] Inference Streams: 1
[05/27/2024-07:46:47] [I] ExposeDMA: Disabled
[05/27/2024-07:46:47] [I] Data transfers: Enabled
[05/27/2024-07:46:47] [I] Spin-wait: Disabled
[05/27/2024-07:46:47] [I] Multithreading: Disabled
[05/27/2024-07:46:47] [I] CUDA Graph: Disabled
[05/27/2024-07:46:47] [I] Separate profiling: Disabled
[05/27/2024-07:46:47] [I] Time Deserialize: Disabled
[05/27/2024-07:46:47] [I] Time Refit: Disabled
[05/27/2024-07:46:47] [I] NVTX verbosity: 0
[05/27/2024-07:46:47] [I] Persistent Cache Ratio: 0
[05/27/2024-07:46:47] [I] Optimization Profile Index: 0
[05/27/2024-07:46:47] [I] Weight Streaming Budget: Disabled
[05/27/2024-07:46:47] [I] Inputs:
[05/27/2024-07:46:47] [I] Debug Tensor Save Destinations:
[05/27/2024-07:46:47] [I] === Reporting Options ===
[05/27/2024-07:46:47] [I] Verbose: Disabled
[05/27/2024-07:46:47] [I] Averages: 10 inferences
[05/27/2024-07:46:47] [I] Percentiles: 90,95,99
[05/27/2024-07:46:47] [I] Dump refittable layers:Disabled
[05/27/2024-07:46:47] [I] Dump output: Disabled
[05/27/2024-07:46:47] [I] Profile: Disabled
[05/27/2024-07:46:47] [I] Export timing to JSON file:
[05/27/2024-07:46:47] [I] Export output to JSON file:
[05/27/2024-07:46:47] [I] Export profile to JSON file:
[05/27/2024-07:46:47] [I]
[05/27/2024-07:46:47] [I] === Device Information ===
Cuda failure: named symbol not found
Environment
TensorRT Version: 10.0
NVIDIA GPU: RTX 4060 Ti
NVIDIA Driver Version: 12.5
CUDA Version: 12.4
CUDNN Version: 8.9.7
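The version numbers above can be cross-checked with commands like these (a minimal sketch; the package query assumes the Ubuntu-based NGC image):

    nvidia-smi                     # on the host: driver version and the CUDA version it supports
    nvcc --version                 # inside the container: CUDA toolkit version
    dpkg -l | grep -i tensorrt     # inside the container: installed TensorRT packages
    python3 -c "import tensorrt; print(tensorrt.__version__)"   # TensorRT Python package version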