5 comments by Gangin Park

Confirmed that it resolves https://github.com/comfyanonymous/ComfyUI/issues/10891#issuecomment-3621595666.

Another OOM log here, @rattus128. Tested on an RTX 4090, using the official workflow [here](https://docs.comfy.org/tutorials/flux/flux-2-dev) with reference images disabled.

```
got prompt
Using pytorch attention in VAE
Using pytorch attention in VAE
VAE...
```

> Hey. So 32MB is very small in the context of flux 2. Comfy defines minimum inference VRAM and headrooms on the order of 100s of MB. If this is...
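For context on the headroom point quoted above, here is a minimal sketch of what a "free VRAM minus reserve plus headroom" check can look like. The function name and the byte values are hypothetical and not ComfyUI's actual implementation; the sketch only illustrates why a ~32MB margin is tiny next to reserves on the order of hundreds of MB.

```python
import torch

# Illustrative sketch only: the constants and function name are assumptions,
# not ComfyUI's real logic. They show the scale mismatch between a ~32MB
# margin and reserves of hundreds of MB plus extra headroom.
MIN_INFERENCE_VRAM = 1 * 1024 ** 3   # assumed ~1 GiB reserved for inference itself
EXTRA_HEADROOM = 400 * 1024 ** 2     # assumed a few hundred MiB of extra slack

def fits_in_vram(model_bytes: int, device: int = 0) -> bool:
    """Return True if loading `model_bytes` would still leave the reserve free."""
    free_bytes, _total_bytes = torch.cuda.mem_get_info(device)
    return free_bytes - model_bytes > MIN_INFERENCE_VRAM + EXTRA_HEADROOM
```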

Thanks for looking into this in detail, @rattus128. I have checked that work and can confirm it also resolves the issue. Will close this if it's merged. To leave notes for...