ComfyUI Flux worked and now it does not anymore
Expected Behavior
On this machine (RTX 3080 Ti, 32 GB RAM) the Flux model worked fine, and now it does not work: ComfyUI crashes without any message in cmd. Was there a change that broke compatibility with this kind of system? What should be the next steps to fix this bug?
Actual Behavior
Crash without a message in CMD
Steps to Reproduce
You need an RTX 3080 Ti, a Windows 10 PC, and 32 GB of RAM; start ComfyUI and run the workflow.
Debug Logs
Where is this log?
Other
Yeah, it's a bug, so what to do next?
I did rename the model to .safetensors and it's still the same crash; ComfyUI crashed.
It does not work even with other Flux models; the loader is broken.
I've encountered the same issue with the workflow above (flux 风格迁移.json).
Did you download the text encoder, vae, and diffusion models from the same sources as recommended in https://comfyanonymous.github.io/ComfyUI_examples/flux/?
No, they are not from the same source, but the workflow worked well just before the update.
Try replacing with the official source.
git checkout v0.3.28 to roll back to the last stable version.
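In case it helps, a minimal sketch of that rollback for a git-based install (not the portable zip), run from inside the ComfyUI folder; the tag name is taken from the suggestion above:

```
:: Run inside the ComfyUI folder of a git-based install.
git fetch --tags
git checkout v0.3.28

:: To return to the latest version later:
:: git checkout master
:: git pull
```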
> Try replacing with the official source.
Would you like me to test the nodes or workflow using the official models to check if any issues remain, or are you indicating that ComfyUI plans to support only Flux's official models and does not intend to support unofficial models (even though they were functional in previous versions)? But to produce images matching my intended style, I rely on certain unofficial models that provide the necessary output.
To provide additional details, in this workflow:
- official resources
  - t5xxl_fp8_e4m3fn.safetensors
  - clip_l.safetensors
  - ae.safetensors (renamed to flux_ae.safetensors)
- unofficial resources
  - the flux model: MR FLUX_FLUX.safetensors (crashed within the LoraLoaderModelOnly node)
  - google_siglip.safetensors (in the CLIPVisionLoader node)
> git checkout v0.3.28 to roll back to the last stable version.
While this might make my workflow work again for now, I'd like to know: when would it be safe to upgrade to the new version? Or is staying on the old version the only long-term solution?
The latest version has fixed my issue. Thanks, all!
"Reconnecting" means that your ComfyUI was forcibly terminated. This usually happens when your operating system force-closes ComfyUI due to insufficient system memory.
Try increasing your pagefile.
"Reconnecting" means that your ComfyUI was forcibly terminated. This usually happens when your operating system force-closes ComfyUI due to insufficient system memory. Try increasing your pagefile.
I was redirected here from the comfyui website and hope to get help. The above quote may apply to me.
I have downloaded and placed the HiDream safetensors and encoders in the appropriate folders and was able to generate images after starting ComfyUI. I was able to generate a number of images, albeit quite slowly. I'm using 8 GB of VRAM and 32 GB of RAM. Probably slightly insufficient to run the model, but as I said, it worked.
After shutting the PC down and restarting (not connected to the internet, so no updates), I tried generating images again, and I get the reconnect error. The command prompt shows that it has paused and to press any key to continue, which just closes the command prompt window. I am able to run flux1-dev (although sometimes with the same error) and I can always run SD3.5 and Realistic Vision 5.1.
Is the reconnect error just due to insufficient RAM? How do I increase the pagefile? I'm a complete newbie...
"Reconnecting" means that your ComfyUI was forcibly terminated. This usually happens when your operating system force-closes ComfyUI due to insufficient system memory. Try increasing your pagefile.
I was redirected here from the comfyui website and hope to get help. The above quote may apply to me.
I have downloaded and placed the HiDream safetendors and encoders in the appropriate folders and was able to generate images after starting ComfyUI. I was able to generate a number of images, albeit quite slow. I'm using 8gb of vram and 32gb ram. Probably slightly insufficient to run the model, but as i said, it worked.
After shutting the pc down and restarting (not connected to the internet, so no updates) I tried generating images again, and I get the reconnect error. The command prompt shows that it has paused and to press any key to continue, which just closes the command prompt window. I am able to run flux1-dev (although sometimes with the same error) and I can always run SD3.5 and Realistic Vision 5.1.
Is the reconnect error just due to insufficient ram? How do I increase the pagefile? I'm a complete newbie...
You can use this method if you are a Windows user: https://learn.microsoft.com/en-us/troubleshoot/windows-client/performance/introduction-to-the-page-file
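For anyone who prefers the command line over the GUI steps in that article, below is a rough sketch using wmic in an elevated Command Prompt on Windows 10; the 16 GB / 48 GB sizes are only example values, so adjust them to what your drive can spare (and if the second command reports no instances, fall back to the GUI route in the linked article):

```
:: Run in an elevated (Administrator) Command Prompt.
:: Stop Windows from managing the pagefile automatically so a custom size sticks.
wmic computersystem where name="%computername%" set AutomaticManagedPagefile=False

:: Example: pagefile on C: with 16 GB initial / 48 GB maximum (values are in MB).
wmic pagefileset where name="C:\\pagefile.sys" set InitialSize=16384,MaximumSize=49152

:: Reboot afterwards for the change to take effect.
```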
Had the same issue here (32 GB RAM and 12 GB VRAM). But two strange things happen: 1 - before running, the HiDream model was downloaded to the correct folder, but when I click on Load Diffusion Model, only other Flux models appear to be selectable, and even after selecting one of those, clicking again does not show HiDream. 2 - after Reconnecting, I need to close ComfyUI and open it up again; not sure if this is expected.
You don't need to restart ComfyUI, but you do need to refresh the browser or reload the node definitions from the menu.
And yes, it's intentional that this step is not performed automatically. When it runs, any currently selected model that no longer exists gets deselected, and a random available model is selected instead. To avoid losing track of which model was originally selected, this action should only be performed with the user's confirmation.