
TensorRT Extension doesn't find files

Giribot opened this issue 10 months ago • 0 comments

TensorRT Extension (used in Stable Diffusion WebUI Forge):

This extension is offered for all forks of "Stable Diffusion WebUI" and is installed through the WebUI's extensions mechanism (here: https://github.com/AUTOMATIC1111/stable-diffusion-webui-tensorrt).

This extension considerably speeds up the differential-equation calculations needed to generate our images. It is provided by NVIDIA for all graphics cards in the "NVIDIA GeForce, NVIDIA RTX" family (it does not work without one of these cards) and runs under Windows and Linux.

I scrupulously followed the installation steps described in this GitHub repository: https://github.com/AUTOMATIC1111/stable-diffusion-webui-tensorrt

_"How to install: Apart from installing the extension normally, you also need to download a zip with TensorRT from NVIDIA.

You need to choose the same version of CUDA as python's torch library is using. For torch 2.0.1 it is CUDA 11.8.

Extract the zip into the extension directory, so that TensorRT-8.6.1.6 (or a similarly named dir) exists in the same place as the **scripts directory and trt_path.py file**. Restart webui afterwards.

You don't need to install CUDA separately.

How to use:

1. Select the model you want to optimize and make a picture with it, including needed loras and hypernetworks.
2. Go to the TensorRT tab that appears if the extension loads properly.
3. In the Convert to ONNX tab, press Convert Unet to ONNX. This takes a short while. After the conversion has finished, you will find an .onnx file with the model in the models/Unet-onnx directory.
4. In the Convert ONNX to TensorRT tab, configure the necessary parameters (including writing the full path to the onnx model) and press Convert ONNX to TensorRT. This takes very long - from 15 minutes to an hour. This takes up a lot of VRAM: you might want to press "Show command for conversion" and run the command yourself after shutting down webui. After the conversion has finished, you will find a .trt file with the model in the models/Unet-trt directory.
5. In settings, on the Stable Diffusion page, use the SD Unet option to select the newly generated TensorRT model.
6. Generate pictures."_
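The two installation constraints quoted above (the TensorRT zip must match torch's CUDA version, and the extracted dir must sit next to `scripts` and `trt_path.py`) can be sanity-checked before restarting webui. A minimal sketch, assuming only the torch 2.0.1 → CUDA 11.8 pairing the README states; `required_cuda` and `layout_ok` are my own hypothetical helper names, not part of the extension:

```python
from pathlib import Path

# Only the pairing the extension's README states; extend as needed.
TORCH_TO_CUDA = {"2.0.1": "11.8"}

def required_cuda(torch_version: str):
    """Return the CUDA version the TensorRT zip must match, if known."""
    return TORCH_TO_CUDA.get(torch_version)

def layout_ok(extension_dir: str) -> bool:
    """Check that a TensorRT-* dir was extracted next to scripts/ and trt_path.py."""
    ext = Path(extension_dir)
    has_trt = any(ext.glob("TensorRT-*"))
    return has_trt and (ext / "scripts").is_dir() and (ext / "trt_path.py").is_file()
```

Running `layout_ok` against the extension folder inside the Stability Matrix package directory would show immediately whether the zip was extracted to the wrong level.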

It doesn't work well, I think, because of the shared folders: the place where the TensorRT files should be located, the models, and the generated models sit halfway between the package, the package's extension, and Stability Matrix, and are hard to identify because of the file sharing (I have a migraine!).
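One way to untangle where a file actually lives is to resolve the symlinks/junctions that Stability Matrix uses for its shared folders. A minimal sketch (the example path is hypothetical, and `resolve_shared` is my own helper, not part of Stability Matrix or the extension):

```python
from pathlib import Path

def resolve_shared(path: str) -> Path:
    """Follow symlinks/junctions to the real on-disk location of a
    shared folder, so you can see where a model file actually lives."""
    return Path(path).resolve()

# Example (hypothetical): if models/Unet-onnx is a junction into the
# Stability Matrix shared models directory, the resolved path will
# point there rather than into the package folder.
# resolve_shared(r"D:\Data\Packages\Stable Diffusion WebUI Forge\models\Unet-onnx")
```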

NVIDIA has forked Stable Diffusion WebUI with TensorRT here (for testing): https://github.com/NVIDIA/Stable-Diffusion-WebUI-TensorRT - and it works!

In short, it doesn't work in Stability Matrix yet.

Stability Matrix console error message:

```
Model loaded in 19.5s (load weights from disk: 0.3s, forge load real models: 17.5s, load VAE: 0.5s, calculate empty prompt: 1.1s).
Exporting v1-5-pruned-emaonly to TensorRT using - Batch Size: 1-1-4 Height: 512-512-768 Width: 512-512-768 Token Count: 75-75-150
ERROR:root:Exporting to ONNX failed. Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
Building TensorRT engine... This can take a while, please check the progress in the terminal.
Building TensorRT engine for D:\Data\Packages\Stable Diffusion WebUI Forge\models\Unet-onnx\v1-5-pruned-emaonly.onnx: D:\Data\Packages\Stable Diffusion WebUI Forge\models\Unet-trt\v1-5-pruned-emaonly_d7049739_cc86_sample=1x4x64x64+2x4x64x64+8x4x96x96-timesteps=1+2+8-encoder_hidden_states=1x77x768+2x77x768+8x154x768.trt
Could not open file D:\Data\Packages\Stable Diffusion WebUI Forge\models\Unet-onnx\v1-5-pruned-emaonly.onnx
Could not open file D:\Data\Packages\Stable Diffusion WebUI Forge\models\Unet-onnx\v1-5-pruned-emaonly.onnx
[W] 'colored' module is not installed, will not use colors when logging. To enable colors, please install the 'colored' module: python3 -m pip install colored
[E] ModelImporter.cpp:773: Failed to parse ONNX model from file: D:\Data\Packages\Stable Diffusion WebUI Forge\models\Unet-onnx\v1-5-pruned-emaonly.onnx
[!] Failed to parse ONNX model. Does the model file exist and contain a valid ONNX model?
Traceback (most recent call last):
  File "D:\Data\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\Data\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "D:\Data\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\Data\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\Data\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "D:\Data\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "D:\Data\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "D:\Data\Packages\Stable Diffusion WebUI Forge\extensions\Stable-Diffusion-WebUI-TensorRT\ui_trt.py", line 126, in export_unet_to_trt
    ret = export_trt(
  File "D:\Data\Packages\Stable Diffusion WebUI Forge\extensions\Stable-Diffusion-WebUI-TensorRT\exporter.py", line 231, in export_trt
    ret = engine.build(
  File "D:\Data\Packages\Stable Diffusion WebUI Forge\extensions\Stable-Diffusion-WebUI-TensorRT\utilities.py", line 227, in build
    network = network_from_onnx_path(
  File "", line 3, in network_from_onnx_path
  File "D:\Data\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\polygraphy\backend\base\loader.py", line 40, in __call__
    return self.call_impl(*args, **kwargs)
  File "D:\Data\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\polygraphy\util\util.py", line 710, in wrapped
    return func(*args, **kwargs)
  File "D:\Data\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\polygraphy\backend\trt\loader.py", line 247, in call_impl
    trt_util.check_onnx_parser_errors(parser, success)
  File "D:\Data\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\polygraphy\backend\trt\util.py", line 88, in check_onnx_parser_errors
    G_LOGGER.critical(
  File "D:\Data\Packages\Stable Diffusion WebUI Forge\venv\lib\site-packages\polygraphy\logger\logger.py", line 605, in critical
    raise ExceptionType(message) from None
polygraphy.exception.exception.PolygraphyException: Failed to parse ONNX model. Does the model file exist and contain a valid ONNX model?
```
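Note that the final "Failed to parse ONNX model" error is a downstream symptom: the earlier `ERROR:root:Exporting to ONNX failed` (cpu vs cuda:0 device mismatch) means the `.onnx` file was likely never written, so the TensorRT build has nothing to parse. A minimal pre-check sketch (`onnx_export_ok` is my own hypothetical helper, not part of the extension) that would surface this before the long engine build starts:

```python
from pathlib import Path

def onnx_export_ok(onnx_path: str) -> bool:
    """Return True only if the exported .onnx file exists and is non-empty.
    A missing or zero-byte file indicates the ONNX export step failed,
    so the TensorRT conversion would fail with a parse error."""
    p = Path(onnx_path)
    return p.is_file() and p.stat().st_size > 0
```

Running this against `models\Unet-onnx\v1-5-pruned-emaonly.onnx` after the export step would distinguish "export never produced a file" from "TensorRT cannot read a valid file".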

Thank you!

Giribot, Apr 16 '24