Bug: Flux Distilled CFG Scale sticks after first generation regardless of subsequent changes
When using the flux1-dev-bnb-nf4.safetensors model, the Distilled CFG Scale only takes effect the first time it is set. If I generate an image with a certain Distilled CFG Scale, then change that value and regenerate with the same seed and parameters, the result is an identical image.
This is not a Gradio issue: "Distilled CFG Scale: 1.2" and "Distilled CFG Scale: 0.8" both show up in the PNG metadata of their respective files, yet apart from that metadata the two images are identical. Notably, changing the regular "CFG" value is applied properly.
UPD: it might still be a Gradio issue after all. Flipping the UI mode switch from "flux" to "all" and back, then filling in the resolution, CFG, and Distilled CFG Scale again did properly change the "Distilled CFG Scale" value; but once more, after the first generation the chosen value sticks.
UPD2: flipping the UI switch does not work after all; that may have been a one-off.
Restarting Forge (and thus reloading the checkpoint) lets me set a different Distilled CFG Scale, but that too only works once.
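That pattern (the value is applied once per model load and then frozen until the next load) suggests the scale is being captured somewhere along the model-loading/patching path instead of being read fresh for every generation. Purely as a hypothetical sketch of that pattern, and emphatically not Forge's actual code:

```python
# Hypothetical sketch only -- NOT Forge's code. It merely shows the kind of
# "read once, cache until reload" logic that would match the symptom.
class FluxWrapperSketch:
    def __init__(self):
        self._baked_distilled_cfg = None   # reset only when the model is reloaded

    def prepare(self, distilled_cfg_from_ui: float) -> float:
        if self._baked_distilled_cfg is None:      # only the first call wins
            self._baked_distilled_cfg = distilled_cfg_from_ui
        return self._baked_distilled_cfg           # later UI changes are ignored

wrapper = FluxWrapperSketch()    # lives until the checkpoint is reloaded
print(wrapper.prepare(0.8))      # 0.8 -> first generation applies the value
print(wrapper.prepare(3.5))      # 0.8 -> second generation silently reuses it
```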
In the log below, "Distilled CFG Scale: 0.8" appears twice, but only during the first generation; no such line is printed for subsequent generations.
```
Model loaded in 2.4s (unload existing model: 0.2s, load state dict: 0.3s, forge model load: 1.9s).
To load target model ModuleDict
Begin to load 1 model
[Memory Management] Current Free GPU Memory: 15016.94 MB
[Memory Management] Required Model Memory: 5154.62 MB
[Memory Management] Required Inference Memory: 1024.00 MB
[Memory Management] Estimated Remaining GPU Memory: 8838.32 MB
Moving model(s) has taken 5.53 seconds
Distilled CFG Scale: 0.8
Distilled CFG Scale: 0.8
To load target model KModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory: 14945.58 MB
[Memory Management] Required Model Memory: 11350.07 MB
[Memory Management] Required Inference Memory: 1024.00 MB
[Memory Management] Estimated Remaining GPU Memory: 2571.51 MB
Moving model(s) has taken 16.57 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:42<00:00, 2.15s/it]
To load target model IntegratedAutoencoderKL
Begin to load 1 model
[Memory Management] Current Free GPU Memory: 14921.45 MB
[Memory Management] Required Model Memory: 159.87 MB
[Memory Management] Required Inference Memory: 1024.00 MB
[Memory Management] Estimated Remaining GPU Memory: 13737.58 MB
Moving model(s) has taken 3.85 seconds
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:45<00:00, 2.29s/it]
To load target model KModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory: 14919.45 MB
[Memory Management] Required Model Memory: 11350.07 MB
[Memory Management] Required Inference Memory: 1024.00 MB
[Memory Management] Estimated Remaining GPU Memory: 2545.39 MB
Moving model(s) has taken 2.26 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:43<00:00, 2.15s/it]
To load target model IntegratedAutoencoderKL
Begin to load 1 model
[Memory Management] Current Free GPU Memory: 14917.45 MB
[Memory Management] Required Model Memory: 159.87 MB
[Memory Management] Required Inference Memory: 1024.00 MB
[Memory Management] Estimated Remaining GPU Memory: 13733.58 MB
Moving model(s) has taken 1.63 seconds
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:43<00:00, 2.19s/it]
```
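For anyone who wants to reproduce this without clicking through the UI, here is a rough sketch against the web API (Forge started with --api). The distilled_cfg_scale payload key is an assumption on my part and may not match the actual field name; the prompt and seed are arbitrary. Hashing the decoded pixels (rather than the PNG bytes) sidesteps the metadata difference mentioned above.

```python
# Rough repro sketch over the /sdapi/v1/txt2img API (Forge launched with --api).
# NOTE: "distilled_cfg_scale" is an assumed payload key; adjust to your build.
import base64, hashlib, io

import requests
from PIL import Image

URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

def pixel_hash(distilled_cfg: float) -> str:
    payload = {
        "prompt": "test prompt",
        "seed": 12345,                         # fixed seed so only the scale differs
        "steps": 20,
        "cfg_scale": 1.0,
        "distilled_cfg_scale": distilled_cfg,  # assumed field name
    }
    r = requests.post(URL, json=payload, timeout=600)
    r.raise_for_status()
    png = base64.b64decode(r.json()["images"][0])
    img = Image.open(io.BytesIO(png))
    return hashlib.sha256(img.tobytes()).hexdigest()   # hash pixels, not metadata

print(pixel_hash(0.8))
print(pixel_hash(3.5))   # identical hash reproduces the bug; different hash = fixed
```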
I am experiencing a very similar but slightly worse version of this.
In my case:
- Flipping from Flux to All and back again makes no difference; Distilled CFG is unchanged despite moving the slider.
- Flipping back to All, reloading the UI, and then flipping back to Flux again also makes no difference; Distilled CFG is still unchanged despite moving the slider.
- The only way for me to change the Distilled CFG Scale is to completely exit Forge and restart from scratch. Then I can use a different Distilled CFG, but just as the OP states, it is then stuck at that value and will not change again until another full restart, despite what the slider says in the UI.
Edit to add: I have just repeated the sequence of tests with flux1-dev-fp8.safetensors and have experienced the same issue.
Flipping the UI switch no longer fixes this for me either, so that might indeed have been a one-off. Distilled CFG Scale is currently stuck until a full restart of Forge.
Same here
Just to add a bit more info, switching to a different flux checkpoint corrects the issue without reloading Forge:
- generate an image using flux1-dev-bnb-nf4.safetensors and distilled CFG=1 (image A)
- change distilled CFG=3.5 and regenerate (image B)
- same image (A=B) but should be different
- change checkpoint to flux1-dev-fp8.safetensors and generate image
- image is clearly using correct distilled CFG, completely different in character (image C)
- change checkpoint back to flux1-dev-bnb-nf4.safetensors
- the image is similar to image C and nothing like image B; the new distilled CFG is finally being applied, as it should have been back in step 3
Obviously, changing the checkpoint and generating an image is slower than restarting Forge; this is just to add (hopefully useful) info. A scripted version of the checkpoint swap is sketched below.
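For API users, roughly the same checkpoint swap can be scripted. This is only a sketch: /sdapi/v1/options and the sd_model_checkpoint key are the upstream A1111 API names (assumed to behave the same in Forge), and the checkpoint titles are placeholders for whatever /sdapi/v1/sd-models reports on your install.

```python
# Sketch: force a checkpoint swap (and thus a model reload) via the web API
# before generating with the new Distilled CFG Scale.
import requests

BASE = "http://127.0.0.1:7860"

def set_checkpoint(title: str) -> None:
    # Changing sd_model_checkpoint via the options endpoint triggers a reload.
    r = requests.post(f"{BASE}/sdapi/v1/options",
                      json={"sd_model_checkpoint": title}, timeout=600)
    r.raise_for_status()

set_checkpoint("flux1-dev-fp8.safetensors")      # placeholder: any other model
set_checkpoint("flux1-dev-bnb-nf4.safetensors")  # placeholder: back to the one you want
```

If the swap alone is not enough, a quick throwaway generation on the other checkpoint (as in the manual steps above) may still be needed.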
To speed things up while waiting for the bug to be fixed: after generating the first image, change the Distilled CFG and select any different checkpoint (even an old SDXL one), then regenerate. As soon as the checkpoint is loaded and the image starts generating, press Skip, switch back to flux1-dev-bnb-nf4.safetensors, and generate. The new image will use the new Distilled CFG.
Same here, updated 10 minutes ago.
Yep still happens
No fix for this yet? It's really restrictive.
Apparently changing the Swap Method (from Queue to Async or vice versa) also works. When you change the Distilled CFG and want to see a different result, just remember to also change the Swap Method. Again, just a faster workaround while waiting for a bug fix.
> Apparently changing the Swap Method (from Queue to Async or vice versa) also works. When you change the Distilled CFG and want to see a different result, just remember to also change the Swap Method. Again, just a faster workaround while waiting for a bug fix.
This works and is a really fast workaround for now. Thanks for the tip!
Recently fixed, please test and close.
Tested and fixed.
Thanks for fixing.
Works perfectly now.
Hmmm... this still seems to persist for me. There is no change between generated images when changing the value. I'm using the Flux FP8 safetensors with the relevant FP8 CLIP models and VAE, etc. Any thoughts? Thanks.
Are you fully up to date? It's respecting changes to Distilled CFG for me, using FP8 Flux, FP8 T5xxl, clip_l and default Flux VAE.