
[defaultAllocator.cpp::deallocateAsync::64] Error Code 1: Cuda Runtime (operation not supported)

AnnaTrainingG opened this issue 8 months ago · 5 comments

Description

When I convert an ONNX model to TensorRT, it always fails with an error like: Error[1]: [defaultAllocator.cpp::deallocateAsync::64] Error Code 1: Cuda Runtime (operation not supported)

trtexec --onnx=onnx_dir/quant_light.onnx
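For reference, a minimal sketch of the same conversion through the TensorRT Python API (assuming TensorRT 10.x Python bindings and the same onnx_dir/quant_light.onnx path; this is only a way to check whether the failure also reproduces outside trtexec, not the exact setup used):

```python
# Sketch: build a TensorRT engine from the same ONNX file via the Python API.
# Assumes TensorRT 10.x bindings; paths and workspace size are illustrative.
import tensorrt as trt

logger = trt.Logger(trt.Logger.INFO)
builder = trt.Builder(logger)
network = builder.create_network(0)  # explicit batch is the default in TRT 10
parser = trt.OnnxParser(network, logger)

with open("onnx_dir/quant_light.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
# 1 GiB workspace limit; adjust for the model and GPU.
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)

serialized = builder.build_serialized_network(network, config)
if serialized is None:
    raise SystemExit("Engine build failed")

with open("quant_light.engine", "wb") as f:
    f.write(serialized)
```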

Environment

TensorRT Version: 10.0.1

[I] [TRT] ONNX IR version: 0.0.7
[I] [TRT] Opset version: 13
[I] [TRT] Producer name: pytorch
[I] [TRT] Producer version: 2.2.1
[I] [TRT] Domain:
[I] [TRT] Model version: 0
[I] [TRT] Doc string:

NVIDIA GPU: RTX A6000
NVIDIA Driver Version: 470.129.06
CUDA Version: 11.8
CUDNN Version: 8.6.0

Operating System:

Python Version (if applicable):

Tensorflow Version (if applicable):

PyTorch Version (if applicable):

Baremetal or Container (if so, version):

Relevant Files

Model link:

Steps To Reproduce

Commands or scripts:

Have you tried the latest release?:

Can this model run on other frameworks? For example run ONNX model with ONNXRuntime (polygraphy run <model.onnx> --onnxrt):
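The polygraphy command above performs this check; a roughly equivalent direct ONNX Runtime sketch is shown below (input shapes and dtypes are assumptions, since the model link is not provided):

```python
# Sketch: run the ONNX model with ONNX Runtime on CPU to confirm the model
# itself is valid. Input shapes/dtypes are assumptions for illustration only.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("onnx_dir/quant_light.onnx",
                            providers=["CPUExecutionProvider"])

feeds = {}
for inp in sess.get_inputs():
    # Replace dynamic dimensions with 1; real shapes depend on the model.
    shape = [d if isinstance(d, int) else 1 for d in inp.shape]
    feeds[inp.name] = np.random.rand(*shape).astype(np.float32)

outputs = sess.run(None, feeds)
print([o.shape for o in outputs])
```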

AnnaTrainingG · Jul 04 '24 07:07