[Feature] Support TensorRT
Issue Description
This extension fails to complete installation/launch on SD.NEXT.
Error message documented here (the same error occurs on the newest master branch after a fresh install): https://github.com/NVIDIA/Stable-Diffusion-WebUI-TensorRT/issues/8

> It seems like this could be easily resolved on the SD.Next side by accepting kwargs, or by having API parity with AUTO1111. See here: https://github.com/NVIDIA/Stable-Diffusion-WebUI-TensorRT/blob/main/install.py#L8,L13,L19
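One way the "accepting kwargs" idea could be sketched: a launch-utility function can tolerate keyword arguments it does not recognize, so an extension's `install.py` written against a different fork's signature does not crash with a `TypeError`. The decorator and function names below are purely illustrative, not SD.Next's or AUTOMATIC1111's actual API.

```python
import inspect

def tolerate_kwargs(func):
    """Wrap a launch-utility function so that calls passing unknown keyword
    arguments (as extensions written for other WebUI forks may do) have those
    arguments silently dropped instead of raising TypeError."""
    accepted = set(inspect.signature(func).parameters)

    def wrapper(*args, **kwargs):
        # Keep only the keyword arguments the wrapped function declares.
        filtered = {k: v for k, v in kwargs.items() if k in accepted}
        return func(*args, **filtered)

    return wrapper

@tolerate_kwargs
def install_package(package: str):
    # Placeholder for the real pip-install logic.
    return f"installing {package}"

# An extension calling with extra kwargs no longer crashes:
install_package("tensorrt", live=True, desc="TensorRT")
```

The trade-off of silently dropping arguments is that genuinely meaningful options are ignored rather than surfaced, which is why full API parity is the more robust fix.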
Version Platform Description
Windows 11, Firefox
Using VENV: C:\sdnext\venv
08:28:52-773330 INFO Starting SD.Next
08:28:52-779610 INFO Python 3.10.6 on Windows
08:28:53-023072 INFO Version: app=sd.next updated=2023-10-17 hash=379fe1f3 url=https://github.com/vladmandic/automatic.git/tree/master
08:28:53-294609 INFO Platform: arch=AMD64 cpu=Intel64 Family 6 Model 183 Stepping 1, GenuineIntel system=Windows release=Windows-10-10.0.22621-SP0 python=3.10.6
08:28:53-297608 INFO nVidia CUDA toolkit detected: nvidia-smi present
URL link of the extension
https://github.com/NVIDIA/Stable-Diffusion-WebUI-TensorRT
URL link of the issue reported in the extension repository
https://github.com/NVIDIA/Stable-Diffusion-WebUI-TensorRT/issues/8
Acknowledgements
- [X] I have read the above and searched for existing issues
this is a feature request to support tensorrt in general. installing this extension on its own is not remotely expected to work.
closing here as work will be tracked via https://github.com/users/vladmandic/projects/2?pane=issue&itemId=42264921
This got flagged as "Reject" in the project board, but there is no explanation why. Further clarification would be appreciated.
i'll reopen this feature request.
every tensorrt implementation i've seen is very fragile: installing tensorrt itself is messy, to the point where nvidia recommends using a separate container just to get it truly working. and then there is the fact that models need to be compiled and frozen. all-in-all, i'm fully open to contributions, but i don't think i will prioritize this work myself.