voltaML-fast-stable-diffusion
Error when running optimize.sh
Traceback (most recent call last):
File "volta_accelerate.py", line 153, in
What is your environment? GPU, CUDA, CuDNN etc? Did all the installations go properly without errors?
Thanks for the reply. My environment is:
NVIDIA-SMI 460.106.00 Driver Version: 460.106.00 CUDA Version: 11.2
GPU is Tesla T4
$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Feb_14_21:12:58_PST_2021
Cuda compilation tools, release 11.2, V11.2.152
Build cuda_11.2.r11.2/compiler.29618528_0
CUDA and cuDNN are version 11.2, and all the installations completed without errors.
And this is the full error log:
Traceback (most recent call last):
File "volta_accelerate.py", line 152, in
[12/11/2022-12:59:07] [TRT] [I] [MemUsageChange] Init CUDA: CPU +311, GPU +0, now: CPU 386, GPU 1407 (MiB)
[12/11/2022-12:59:10] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +263, GPU +74, now: CPU 702, GPU 1481 (MiB)
[12/11/2022-12:59:10] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING
in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
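The lazy-loading line is only a warning, not the failure. It can be addressed as the message suggests by setting the environment variable before running the script; note that lazy loading only takes effect on newer CUDA releases (11.7+), so on the CUDA 11.2 setup above it may be a no-op, and the warning is harmless either way. A minimal sketch:

```shell
# Enable CUDA lazy loading to reduce device memory usage (CUDA 11.7+).
# On older CUDA versions this variable has no effect.
export CUDA_MODULE_LOADING=LAZY
echo "$CUDA_MODULE_LOADING"   # prints LAZY
```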
Could not open file ./unet/model.onnx
Could not open file ./unet/model.onnx
[12/11/2022-12:59:10] [TRT] [E] ModelImporter.cpp:688: Failed to parse ONNX model from file: ./unet/model.onnx
ONNX model parsing failed
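The actual failure is that TensorRT could not open ./unet/model.onnx, which usually means the ONNX export step never ran (or ran in a different working directory). A hypothetical pre-flight check, not part of volta_accelerate.py, that fails early with a clearer message:

```python
import os

def check_onnx(path: str) -> bool:
    """Return True if the ONNX file exists and is non-empty."""
    return os.path.isfile(path) and os.path.getsize(path) > 0

# "Could not open file" from the TRT parser just means the file is
# missing or empty, so surface that before calling the parser:
if not check_onnx("./unet/model.onnx"):
    print("./unet/model.onnx missing - run the ONNX export step first, "
          "and check the script's working directory")
```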
This issue is stale because it has been open for 30 days with no activity.
Closing, as TensorRT support was deprecated in favor of AITemplate.