
ComfyUI gets stuck after clicking Queue Prompt

Open g0dzcsgo opened this issue 1 year ago • 5 comments

Your question

Okay, so I am trying to run FLUX locally using ComfyUI. I am using flux.1-dev and the t5xxl_fp16 clip.

My GPU is: RTX 3080 10GB VRAM

I also downloaded the latest version of ComfyUI, posted 10 hours before writing this. I am using the portable version.

Logs

Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
model weight dtype torch.bfloat16, manual cast: None
model_type FLUX

Other

No response

g0dzcsgo avatar Aug 10 '24 12:08 g0dzcsgo

Same problem here.

I tried debugging and see that the program terminates without any error message at layers.py, line 141

self.txt_mlp = nn.Sequential(
    operations.Linear(hidden_size, mlp_hidden_dim, bias=True, dtype=dtype, device=device),
    nn.GELU(approximate="tanh"),
    operations.Linear(mlp_hidden_dim, hidden_size, bias=True, dtype=dtype, device=device),
)

Wrapping the code in a try-except didn't help. It just terminates without reaching the except block.
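A process dying without ever reaching the except block usually points to a native crash (for example a segfault inside a compiled torch kernel) rather than a Python exception. The standard-library faulthandler module can surface these; a minimal sketch (the 60-second interval is just an illustrative choice):

```python
import faulthandler
import sys

# Fatal signals from native code (SIGSEGV, SIGABRT, ...) kill the process
# before Python's exception machinery runs, so try/except never fires.
# faulthandler dumps the Python stack to stderr when such a signal arrives.
faulthandler.enable()

# Optionally dump every thread's stack on a timer while the model loads,
# to see where the process is stuck even if it never crashes outright.
faulthandler.dump_traceback_later(timeout=60, repeat=True, file=sys.stderr)
```

Adding those two calls near the top of main.py before the hang would at least show which frame the process dies or stalls in.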

CR82 avatar Aug 10 '24 15:08 CR82

Same problem. When I first downloaded Flux it worked well, but after updating ComfyUI it takes forever to compute: loading the diffusion model never finishes and it gets stuck there.

kakachiex2 avatar Aug 10 '24 19:08 kakachiex2

> Same problem here. I tried debugging and see that the program terminates without any error message at layers.py, line 141. Wrapping the code in a try-except didn't help.

It could be an issue occurring in torch's native code. Try downgrading torch and testing again.
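One way to do that with the portable build (assuming its usual python_embeded layout; the exact version pin and CUDA index URL below are illustrative, so pick a combination known to work on your GPU):

```shell
# Run from the portable install's root so the embedded Python is used.
# The version numbers are illustrative, not a known-good recommendation.
.\python_embeded\python.exe -m pip install "torch==2.3.1" "torchvision==0.18.1" --index-url https://download.pytorch.org/whl/cu121
```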

ltdrdata avatar Aug 11 '24 06:08 ltdrdata

Have you tried running the fp8 versions of the model and clip?
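For context on why that helps on a 10GB card, here is a rough back-of-envelope on weight memory alone (the parameter counts are approximations I'm assuming, not exact model specs):

```python
# Weight memory is roughly parameter count x bytes per element
# (activations, the VAE, and runtime overhead come on top of this).
def weight_gib(params: float, bytes_per_param: int) -> float:
    return params * bytes_per_param / 2**30

FLUX_PARAMS = 12e9   # Flux.1-dev transformer, roughly 12B (assumed)
T5_PARAMS = 4.7e9    # t5xxl text encoder, roughly 4.7B (assumed)

print(f"flux fp16: {weight_gib(FLUX_PARAMS, 2):.1f} GiB")  # ~22 GiB
print(f"flux fp8:  {weight_gib(FLUX_PARAMS, 1):.1f} GiB")  # ~11 GiB
print(f"t5 fp16:   {weight_gib(T5_PARAMS, 2):.1f} GiB")
print(f"t5 fp8:    {weight_gib(T5_PARAMS, 1):.1f} GiB")
```

At fp16 the transformer weights alone exceed 10GB of VRAM, which forces heavy offloading and can look like a hang; fp8 roughly halves the weight footprint.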

GalaxyTimeMachine avatar Aug 11 '24 08:08 GalaxyTimeMachine

My rig: RTX 2060 with 6GB VRAM and 64GB RAM. It works, but after updating ComfyUI the model takes about an hour to load; before the update it took 20 to 30 minutes. I'm using Kijai's fp8 model.

kakachiex2 avatar Aug 12 '24 02:08 kakachiex2

This issue is being marked stale because it has not had any activity for 30 days. Reply below within 7 days if your issue still isn't solved, and it will be left open. Otherwise, the issue will be closed automatically.

github-actions[bot] avatar Mar 05 '25 11:03 github-actions[bot]