stable-diffusion-webui-forge
[Bug]: Turbo/SGMUniform + batch size generates different outcome with the same seed
Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
What happened?
If I use any of the Turbo or SGMUniform sampling methods with a batch size greater than 1, it messes with the seeds. Here's an example:
Seed: 1, batch size 2, Euler A Turbo
This gives me 2 images, with seed 1 and seed 2 (no prompt).
And then Seed: 2, batch size 1, Euler A Turbo
This should give me the 2nd image (the colorful one), but it doesn't. The two results are often quite close, as if the seed were off by one, yet it's the same seed: the PNG info is identical.
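For context, here is a minimal sketch (illustrative plain PyTorch, not Forge's actual code) of how per-image seeds are normally expected to behave: image i in a batch gets seed + i and its own seeded initial noise, so regenerating any one image alone with its recorded seed should reproduce it exactly.

import torch

def initial_noise(seed, shape=(4, 64, 64)):
    # Each image's starting latent comes from its own seeded generator,
    # independent of how many other images share the batch.
    g = torch.Generator().manual_seed(seed)
    return torch.randn(shape, generator=g)

# Seed 1, batch size 2 -> the two images use seeds 1 and 2.
batch = [initial_noise(1 + i) for i in range(2)]

# Seed 2, batch size 1 -> should match the 2nd image of the batch above.
single = initial_noise(2)
print(torch.equal(batch[1], single))  # True: same seed, same starting noise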
Steps to reproduce the problem
Easy test:
What should have happened?
The 2nd image should have been the same, since it's the same seed.
What browsers do you use to access the UI?
Google Chrome
Sysinfo
Console logs
2024-03-12 21:56:34,706 - ControlNet - INFO - ControlNet UI callback registered.
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
model_type EPS
UNet ADM Dimension 2816
Startup time: 7.8s (prepare environment: 1.7s, import torch: 2.1s, import gradio: 0.6s, setup paths: 0.5s, other imports: 0.3s, load scripts: 1.2s, create ui: 0.4s, gradio launch: 1.0s).
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale'}
left over keys: dict_keys(['conditioner.embedders.0.logit_scale', 'conditioner.embedders.0.text_projection'])
To load target model SDXLClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 9072.99609375
[Memory Management] Model Memory (MB) = 2144.3546981811523
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 5904.641395568848
Moving model(s) has taken 0.40 seconds
Model loaded in 4.5s (load weights from disk: 0.5s, forge instantiate config: 0.8s, forge load real models: 2.3s, load VAE: 0.3s, calculate empty prompt: 0.5s).
To load target model SDXL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 9016.80712890625
[Memory Management] Model Memory (MB) = 4897.086494445801
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 3095.720634460449
Moving model(s) has taken 1.34 seconds
100%|████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:01<00:00, 4.51it/s]
To load target model AutoencoderKL
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 4012.14990234375
[Memory Management] Model Memory (MB) = 159.55708122253418
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 2828.592821121216
Moving model(s) has taken 0.06 seconds
Total progress: 100%|████████████████████████████████████████████████████████████████████| 5/5 [00:01<00:00, 3.17it/s]
To load target model SDXLClipModel
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 3838.0810546875
[Memory Management] Model Memory (MB) = 2144.3546981811523
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 669.7263565063477
Moving model(s) has taken 0.44 seconds
100%|████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 8.16it/s]
Total progress: 100%|████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 7.40it/s]
Total progress: 100%|████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 11.45it/s]
Additional information
No response
I'm not sure if it's a typo, but one of the checkboxes says:
> The issue is caused by an extension, but I believe it is caused by a bug in the webui
Should it instead say "The issue is NOT caused"?
Can anyone else replicate this?
Replicated, same issue.
However, I believe this is not related to Turbo/SGMUniform specifically, but to some issue with the global RNG in the forge-added samplers. Here are my observations:
- DPM++ 2M Turbo / DPM++ 2M SGMUniform have no problem, and they are all deterministic
- Euler A Turbo / LCM Karras / Euler SGMUniform / Euler A SGMUniform have the issue, and they are all stochastic (see the sketch below)
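To illustrate the global-RNG coupling (a minimal plain-PyTorch sketch, not Forge code): stochastic samplers add fresh noise at every step, and if that in-loop noise is drawn from one shared stream for the whole batch, an image's noise depends on its batch position, so the same seed cannot reproduce it at a different batch size:

import torch

# Initial latents are seeded per image, so they are fine either way.
# The in-loop noise of a stochastic sampler is not:
torch.manual_seed(1)
step_noise_batch = torch.randn(2, 4)    # batch size 2: one draw covers both images

torch.manual_seed(2)
step_noise_single = torch.randn(1, 4)   # batch size 1 at seed 2

# Image 2 of the batch gets different in-loop noise than the solo run at
# seed 2, even though both record "seed 2" in the PNG info.
print(torch.allclose(step_noise_batch[1], step_noise_single[0]))  # False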
Special case: the problem with DPM++ 2M SDE Turbo / DPM++ 2M SDE SGMUniform is obvious in the code, since the SDE scheduler should use a pre-initialized noise_sampler so that each image in the batch gets its own RNG:
https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/bef51aed032c0aaa5cfd80445bc4cf0d85b408b5/modules/sd_samplers_kdiffusion.py#L219
if self.config.options.get('brownian_noise', False):
    noise_sampler = self.create_noise_sampler(x, sigmas, p)
    extra_params_kwargs['noise_sampler'] = noise_sampler
while the forge-added SDE samplers don't correctly set the `brownian_noise` option:
https://github.com/lllyasviel/stable-diffusion-webui-forge/blob/29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7/modules_forge/forge_alter_samplers.py#L39
https://github.com/lllyasviel/stable-diffusion-webui-forge/blob/29be1da7cf2b5dccfc70fbdd33eb35c56a31ffb7/modules_forge/forge_alter_samplers.py#L44
unlike the registrations in the original A1111 webui, which do:
https://github.com/AUTOMATIC1111/stable-diffusion-webui/blob/bef51aed032c0aaa5cfd80445bc4cf0d85b408b5/modules/sd_samplers_kdiffusion.py#L25
('DPM++ 2M SDE', 'sample_dpmpp_2m_sde', ['k_dpmpp_2m_sde_ka'], {"brownian_noise": True}),
Adding the missing `"brownian_noise": True` fixes these two SDE samplers.
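For reference, create_noise_sampler in A1111 (paraphrased from the linked sd_samplers_kdiffusion.py; details may vary between versions) builds a BrownianTreeNoiseSampler keyed to each image's own seed, which is why the SDE samplers stay deterministic across batch sizes once the option is set:

from k_diffusion.sampling import BrownianTreeNoiseSampler

def create_noise_sampler(self, x, sigmas, p):
    """Create a noise sampler seeded per image, so SDE results are
    deterministic across different batch sizes."""
    sigma_min, sigma_max = sigmas[sigmas > 0].min(), sigmas.max()
    # One seed per image in the current batch:
    current_iter_seeds = p.all_seeds[p.iteration * p.batch_size:(p.iteration + 1) * p.batch_size]
    return BrownianTreeNoiseSampler(x, sigma_min, sigma_max, seed=current_iter_seeds)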
> Can anyone else replicate this?
Yes, I just experienced the same thing. I tested with different samplers, since Forge provides so many, and found that Euler A Turbo is one of the better fast samplers for Pony Diffusion V6. I used a batch size of 4 for my generations and was unable to reproduce the images afterwards. The best workaround is to use a batch count of 4 and leave the batch size at 1. Basically, this bug is a repeat of similar issues in the past.
> replicated, same issue
> but I believe that this is not related to Turbo / SGMUniform, but to some issue with global RNG in forge-added samplers. Here are my observations:
> - DPM++ 2M Turbo / DPM++ 2M SGMUniform have no problem, and they are all deterministic
> - Euler A Turbo / LCM Karras / Euler SGMUniform / Euler A SGMUniform have the issue, and they are all stochastic
> Special: the problem of DPM++ 2M SDE Turbo / DPM++ 2M SDE SGMUniform is pretty obvious in code, since the SDE scheduler should use a pre-initialized noise_sampler so that each image in a batch has its own RNG, while the forge-added SDE samplers didn't correctly set the `brownian_noise` option. So by adding the missing `"brownian_noise": True`, the two SDE samplers can be fixed.
I apologize for reviving an older thread, but I noticed this bug in Forge last night while trying to replicate a pretty rad photo I'd created earlier during some sampler "experimentation". The result was the same bug everyone here has been reporting: all generation parameters are identical, including the seed, but DPM++ 2M SDE Turbo/SGMUniform never reproduce the same image (for the same seed, the image is always different: non-deterministic, stochastic, and so on), which is NOT normal behavior.
So a quick Google of this phenomenon led me here, where, after reading all the replies, I was able to repair the DPM++ 2M SDE Turbo/SGMUniform samplers per your instructions, @SLAPaper. Here are my changes:
.../modules_forge/forge_alter_samplers.py: LINE 39
sd_samplers_common.SamplerData('DPM++ 2M SDE Turbo', build_constructor(sampler_name='dpmpp_2m_sde', scheduler_name='turbo'), ['dpmpp_2m_sde_turbo'], {"brownian_noise": True}),
.../modules_forge/forge_alter_samplers.py: LINE 44
sd_samplers_common.SamplerData('DPM++ 2M SDE SGMUniform', build_constructor(sampler_name='dpmpp_2m_sde', scheduler_name='sgm_uniform'), ['dpmpp_2m_sde_sgm_uniform'], {"brownian_noise": True}),
Your fixes for these two samplers are verified as a functional solution to the bug! However, one thing I noticed: when I use the blue-and-white arrow button below "Generate" to recall my previous prompt/settings, and my last-used sampler was one of the samplers from (as far as I can tell) forge_alter_samplers.py, a new field pops up in the UI below the "Seed" field(s) called "Override settings", with a box in it that says "RNG: GPU". It has an "X" to dismiss it, which I've been doing, but I've tested Forge both with and without dismissing it, and it seems to have no effect on generation or reproducibility either way. So I have no clue what that's about.
Otherwise, your fix is successful! Thank you very much, @SLAPaper!
Now, if only we could fix the other affected samplers, we'd be golden. Is anyone privy to how to do that?
...Dom
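One plausible direction for the remaining ancestral samplers (an untested sketch with hypothetical names, not Forge code): k-diffusion's ancestral samplers also accept a noise_sampler callable, and the default draws torch.randn_like noise from the global RNG, which is exactly the coupling described above. Passing a per-image seeded sampler through extra_params_kwargs, the way the fixed SDE path does, should decouple each image's in-loop noise from its batch position:

import torch

def make_seeded_noise_sampler(x, seeds):
    # Hypothetical helper: one generator per image, seeded independently,
    # so image i's in-loop noise never depends on batch size or position.
    gens = [torch.Generator(device='cpu').manual_seed(int(s)) for s in seeds]

    def noise_sampler(sigma, sigma_next):
        # Signature expected by k-diffusion's ancestral samplers.
        per_image = [torch.randn(x.shape[1:], generator=g).to(x) for g in gens]
        return torch.stack(per_image)

    return noise_sampler

# e.g. extra_params_kwargs['noise_sampler'] = make_seeded_noise_sampler(x, current_iter_seeds)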