
error with using optimize.sh

Ted-developer opened this issue 2 years ago • 2 comments

Traceback (most recent call last):
  File "volta_accelerate.py", line 153
    convert_to_onnx(args)
  File "volta_accelerate.py", line 79, in convert_to_onnx
    traced_model = torch.jit.trace(
  File "/home/work/python/lib/python3.8/site-packages/torch/jit/_trace.py", line 750, in trace
    return trace_module(
  File "/home/work/python/lib/python3.8/site-packages/torch/jit/_trace.py", line 967, in trace_module
    module._c._create_method_from_trace(
  File "/home/work/python/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/work/python/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1118, in _slow_forward
    result = self.forward(*input, **kwargs)
TypeError: forward() takes from 4 to 5 positional arguments but 6 were given

Ted-developer avatar Dec 10 '22 11:12 Ted-developer

What is your environment? GPU, CUDA, CuDNN etc? Did all the installations go properly without errors?

VoltaML avatar Dec 10 '22 16:12 VoltaML

What is your environment? GPU, CUDA, CuDNN etc? Did all the installations go properly without errors?

Thanks for the reply. My environment is:

NVIDIA-SMI 460.106.00 Driver Version: 460.106.00 CUDA Version: 11.2
GPU is Tesla T4

$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2021 NVIDIA Corporation
Built on Sun_Feb_14_21:12:58_PST_2021
Cuda compilation tools, release 11.2, V11.2.152
Build cuda_11.2.r11.2/compiler.29618528_0

CUDA and CuDNN are 11.2, and all the installations completed without errors.

And this is the whole error log:

Traceback (most recent call last):
  File "volta_accelerate.py", line 152
    convert_to_onnx(args)
  File "volta_accelerate.py", line 78, in convert_to_onnx
    traced_model = torch.jit.trace(
  File "/home/work/python/lib/python3.8/site-packages/torch/jit/_trace.py", line 750, in trace
    return trace_module(
  File "/home/work/python/lib/python3.8/site-packages/torch/jit/_trace.py", line 967, in trace_module
    module._c._create_method_from_trace(
  File "/home/work/python/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/work/python/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1118, in _slow_forward
    result = self.forward(*input, **kwargs)
TypeError: forward() takes from 4 to 5 positional arguments but 6 were given
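For context on this kind of TypeError: it usually indicates a version mismatch between the export script and the installed diffusers package, where the script's call to torch.jit.trace passes one more positional argument than the model's forward() accepts. A minimal, torch-free sketch of the mismatch and a possible shim; the names OldUNet and UNetShim are hypothetical illustrations, not code from this repo:

```python
class OldUNet:
    # Stand-in for a UNet whose forward() accepts three inputs plus an
    # optional flag: (self, sample, timestep, encoder_hidden_states,
    # return_dict=True) -> "takes from 4 to 5 positional arguments".
    def forward(self, sample, timestep, encoder_hidden_states, return_dict=True):
        return (sample, timestep, encoder_hidden_states)

unet = OldUNet()
try:
    # Passing one extra positional argument reproduces the error shape
    # seen in the traceback above (6 given, including self):
    unet.forward(1, 2, 3, 4, 5)
except TypeError as e:
    print(e)

class UNetShim:
    # Hypothetical workaround: accept and drop any surplus positional
    # arguments before delegating to the real forward().
    def __init__(self, unet):
        self.unet = unet

    def forward(self, sample, timestep, encoder_hidden_states, *extra):
        return self.unet.forward(sample, timestep, encoder_hidden_states)

shim = UNetShim(unet)
print(shim.forward(1, 2, 3, 4, 5))  # surplus args are ignored
```

In practice, pinning diffusers to the version the repo was developed against (or wrapping the model in a small torch.nn.Module like the shim above before calling torch.jit.trace) is the usual way around this class of error.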

[12/11/2022-12:59:07] [TRT] [I] [MemUsageChange] Init CUDA: CPU +311, GPU +0, now: CPU 386, GPU 1407 (MiB)
[12/11/2022-12:59:10] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +263, GPU +74, now: CPU 702, GPU 1481 (MiB)
[12/11/2022-12:59:10] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See CUDA_MODULE_LOADING in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
Could not open file ./unet/model.onnx
Could not open file ./unet/model.onnx
[12/11/2022-12:59:10] [TRT] [E] ModelImporter.cpp:688: Failed to parse ONNX model from file: ./unet/model.onnx
ONNX model parsing failed
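A note on reading this log: the TensorRT errors are downstream of the trace failure, since ./unet/model.onnx was never written, the ONNX parser has nothing to open. A quick sanity check before re-running the build (the path is taken from the log above; the lazy-loading variable only silences the [TRT] [W] warning and needs a newer CUDA than 11.2):

```shell
# Optional: silence the CUDA lazy-loading warning (effective on CUDA >= 11.7):
export CUDA_MODULE_LOADING=LAZY

# Confirm the export step actually produced the ONNX file before
# invoking the TensorRT build:
if [ -f ./unet/model.onnx ]; then
    echo "ONNX model present"
else
    echo "ONNX export failed; fix the trace error first"
fi
```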

Ted-developer avatar Dec 11 '22 05:12 Ted-developer

This issue is stale because it has been open for 30 days with no activity.

github-actions[bot] avatar Mar 20 '23 02:03 github-actions[bot]

Closing, as TensorRT support was deprecated in favor of AITemplate.

Stax124 avatar Mar 20 '23 13:03 Stax124