Command-line arg for disabling "torch.backends.cudnn.benchmark"
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What would your feature do ?
`torch.backends.cudnn.benchmark = True` in devices.py can cause inconsistent results when re-launching the webUI (as described in https://pytorch.org/docs/stable/notes/randomness.html).
PNG Info > txt2img can fail to produce an identical result. Small, uncomplicated prompts have a low chance of failing; complex prompts, especially ones that exceed the token limit or use heavy emphasis, are nearly impossible to reproduce a second time.
Integrate a command-line arg to disable the benchmarking backend.
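For illustration, a minimal sketch of how such a switch could be wired up; the flag name `--disable-cudnn-benchmark` and the exact insertion points in cmd_args.py / devices.py are assumptions, not existing code:

```python
# cmd_args.py -- hypothetical flag, shown for illustration only
parser.add_argument(
    "--disable-cudnn-benchmark",
    action="store_true",
    help="never set torch.backends.cudnn.benchmark = True (trades speed for reproducibility)",
)

# devices.py -- guard whatever currently enables cudnn.benchmark behind the new flag
import torch
from modules import shared  # shared.cmd_opts holds the parsed command-line options

def configure_cudnn():
    if shared.cmd_opts.disable_cudnn_benchmark:
        torch.backends.cudnn.benchmark = False
        # per the PyTorch randomness notes, pinning the algorithm choice
        # helps run-to-run reproducibility
        torch.backends.cudnn.deterministic = True
        return
    # ... existing hardware-specific enabling logic would run here ...
```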
Proposed workflow
- Go to ....
- Press ....
- ...
Additional information
No response
If you look at devices.py, you'll see that cudnn.benchmark is NOT enabled for everyone; it's enabled only for very specific HW, and for a reason (due to a bug where those specific old cards falsely report that no fp16 ops are possible). Do you have that exact HW and still want to disable it?
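For context, the check in devices.py is roughly of this shape (paraphrased, not a verbatim copy): it only switches benchmarking on when a card with compute capability 7.5 is detected.

```python
# devices.py -- approximate shape of the existing check, not an exact copy
import torch

if torch.cuda.is_available():
    # cudnn.benchmark lets a range of cards do fp16 that otherwise falsely
    # report no fp16 ops are possible
    if any(torch.cuda.get_device_capability(devid) == (7, 5)
           for devid in range(torch.cuda.device_count())):
        torch.backends.cudnn.benchmark = True
```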
I already disabled it in the .py. Still, it was causing non-deterministic behaviour for me (GTX 2080 Super) upon relaunching the web UI. Of that I'm 100% sure.
That's bad if I want to XYZ re-sample a previous image for post-processing reasons.
And since hardly anyone seems to be aware of that, I thought it might be a good idea to add a command-line arg to disable it for sure.
Closing this in favor of https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/12879 because it's actually a bug (I have two GPUs installed and the cuDNN check looks for anything with a compute capability of 7.5). You are correct that it shouldn't apply to the 20XX series either, though.
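As a quick diagnostic (a sketch, not part of the webui), this prints what that capability check sees on a multi-GPU machine:

```python
import torch

# List every installed CUDA device and its compute capability.
for idx in range(torch.cuda.device_count()):
    name = torch.cuda.get_device_name(idx)
    cap = torch.cuda.get_device_capability(idx)
    print(f"cuda:{idx} {name} -> compute capability {cap}")

# If any card reports (7, 5), cudnn.benchmark gets enabled for the whole process,
# even when generation actually runs on a different card.
```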