
[bug]: Gets randomly stuck at "Generating Running VAE Decoder"

Open adoermenen opened this issue 10 months ago • 3 comments

Is there an existing issue for this problem?

  • [x] I have searched the existing issues

Operating system

Windows

GPU vendor

Nvidia (CUDA)

GPU model

RTX 4090

GPU VRAM

24GB

Version number

5.6.0

Browser

Microsoft Edge 131.0.2903.146

Python dependencies

accelerate 1.0.1
compel 2.0.2
cuda 12.4
diffusers 0.31.0
numpy 1.26.3
opencv 4.9.0.80
onnx 1.16.1
pillow 10.2.0
python 3.11.11
torch 2.4.1+cu124
torchvision 0.19.1+cu124
transformers 4.46.3
xformers Not Installed

What happened

Invoke will occasionally get stuck at "Generating Running VAE Decoder" when attempting to generate. GPU load goes to 100% when this happens, and I need to restart Invoke to break out of it.

This started occurring immediately after upgrading from 5.5.0 to 5.6.0. So far it feels like this only happens after changing the model/checkpoint, although I haven't confirmed this.

What you expected to happen

For generation to complete without getting stuck

How to reproduce the problem

No confirmed steps to reproduce, although it feels like frequent model/checkpoint changes increase the likelihood of the problem occurring

Additional context

No response

Discord username

No response

adoermenen avatar Jan 25 '25 20:01 adoermenen

Having the exact same issue, to a T. Tried enabling the low-VRAM parameters in the YAML file and it didn't fix anything. EDIT: Thought it was maybe an issue with the models I was using, but I see the same issue with very popular Stable Diffusion models like Juggernaut XL.

cookiecleric avatar Feb 10 '25 23:02 cookiecleric
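For reference, a minimal sketch of the low-VRAM settings mentioned in the comment above, as they would sit in `invokeai.yaml`. The key names (`enable_partial_loading`, `device_working_mem_gb`) are recalled from the 5.6 low-VRAM documentation and are assumptions here; check them against the config reference for your installed version.

```yaml
# invokeai.yaml -- sketch only; verify key names against your version's config docs
enable_partial_loading: true   # assumed key: stream model weights into VRAM on demand
device_working_mem_gb: 4       # assumed key: keep headroom free for the VAE decode step
```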

Also having this exact same issue. The VAE decoder stalls after running several prompts. I have to shut down Invoke and reopen it.

KMX415 avatar Feb 11 '25 23:02 KMX415

I have this issue too. It seems to be related to the model caching system; the problem comes in when you keep switching models or LoRAs. I notice that VRAM usage keeps going up as models are switched, until it hits the memory limit. The application is not actually stuck: it has run out of VRAM and is running the VAE from CPU RAM, which is so slow that it looks stuck to us. If you leave it for a minute, the image will eventually complete. Instead, I just prefer to close the app, which clears the cache and starts over with a fresh slate.

niksad8 avatar Mar 08 '25 17:03 niksad8
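If the caching/offload explanation above is right, the stall should coincide with the GPU memory being nearly full rather than the GPU sitting idle. Below is a small sketch for checking that from a Python shell on the same machine, using standard `torch.cuda` calls; the helper name `report_vram` is just for illustration. Watching `nvidia-smi` during the stall gives the same information.

```python
import torch

def report_vram(note: str = "") -> None:
    """Print device-wide free/total VRAM plus what this process has allocated/reserved."""
    if not torch.cuda.is_available():
        print("CUDA not available")
        return
    free_b, total_b = torch.cuda.mem_get_info()   # whole-device free/total, in bytes
    gib = 1024 ** 3
    print(f"{note} free={free_b / gib:.2f} GiB of {total_b / gib:.2f} GiB "
          f"(allocated by this process: {torch.cuda.memory_allocated() / gib:.2f} GiB, "
          f"reserved: {torch.cuda.memory_reserved() / gib:.2f} GiB)")

if __name__ == "__main__":
    report_vram("current state:")
```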

Have not experienced this issue for quite some time, closing issue

adoermenen avatar Sep 09 '25 14:09 adoermenen