TensorRT
defaultAllocator.cpp::nvinfer1::internal::DefaultAllocator::allocate::63] Error Code 1: Cuda Runtime (out of memory)
After changing the input size of my ONNX model from 1 x 3 x 1024 x 1024 to 1 x 3 x 2000 x 2000, I get the following error when converting the ONNX model to a TensorRT engine:
defaultAllocator.cpp::nvinfer1::internal::DefaultAllocator::allocate::63] Error Code 1: Cuda Runtime (out of memory)
Environment
cuDNN: 8.4.1.50
CUDA: 11.2
TensorRT: 8.4.2.4
Modifying maxWorkspaceSize, whether larger or smaller, does not solve the problem. Is there any solution?
Thanks.
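For context on why the larger input can exhaust GPU memory even though the weights are unchanged: activation tensors scale with spatial resolution. Below is a rough back-of-envelope estimate for the input tensor alone (an illustration only, assuming FP32 elements; TensorRT's actual peak memory depends on the network's intermediate tensors and the tactics the builder selects):

```python
# Rough estimate of how input/activation memory scales with resolution.
# Assumes FP32 (4 bytes per element); this is an illustration, not a
# measurement of TensorRT's actual allocations.

def tensor_bytes(shape, dtype_size=4):
    """Size in bytes of a dense tensor with the given shape."""
    n = 1
    for d in shape:
        n *= d
    return n * dtype_size

small = tensor_bytes((1, 3, 1024, 1024))  # original input: 12.0 MiB
large = tensor_bytes((1, 3, 2000, 2000))  # enlarged input: ~45.8 MiB

print(f"1x3x1024x1024: {small / 2**20:.1f} MiB")
print(f"1x3x2000x2000: {large / 2**20:.1f} MiB")
print(f"scale factor:  {large / small:.2f}x")
```

Every convolutional feature map in the network grows by roughly the same ~3.8x factor, so peak device memory during engine building can grow well beyond what the workspace setting controls.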
This looks like a bug. Can you share the ONNX model here?
@zerollzeng, thank you for your attention. Here's the model: random_param_4_nvidia_test_2e3_2e3.onnx, please download.
Thanks again.
Thanks for reporting this. I've filed an internal bug to track it.
This will be fixed in TRT 8.5.