stable-diffusion-webui
[Bug]: No image preview during training
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
Whenever I'm training a hypernetwork, I get this:
No preview. The previews are being generated, the UI just doesn't show them; I have to open the output directory in a separate image viewer to see my progress.
It has never worked for me. Every other part of SD works fine, including generating images in txt2img, img2img, etc. - they all display without problems.
My suspicion is that my launch parameters are responsible, particularly `--listen` or `--hide-ui-dir-config`:
--listen --port
Does anyone else experience this?
Steps to reproduce the problem
- Start AUTOMATIC1111 with the above parameters.
- Train a hypernetwork, any hypernetwork.
What should have happened?
The preview should be shown in the UI during training.
Commit where the problem happens
6bd6154a92eb05c80d66df661a38f8b70cc13729
What platforms do you use to access UI ?
Linux
What browsers do you use to access the UI ?
Google Chrome
Command Line Arguments
--listen --port <myport> --gradio-auth <myauth> --hide-ui-dir-config
Additional information, context and logs
No response
Is this a bug? I thought it was done that way on purpose. At least now the "CUDA out of memory" error no longer appears (while trying to show the preview).
If you fix this, please add an option to disable previews during training.
"CUDA out of memory" AFAIK has nothing to do with previews, but with not restarting the server after interrupting training runs. There's a memory leak. I can get one interrupt and continue training without problems, but after two it's hosed.
Also, I've heard that interrupting, then switching to and training something else without a restart, leads to data corruption. I can't verify that, but I have seen some funky behavior that makes me wonder.
@enn-nafnlaus In the Settings tab, under User interface, try switching "Show image creation progress every N sampling steps" away from 0. I'm on Windows but this did the trick for me.
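The webui persists Settings-tab values to a `config.json` file in its working directory. As a minimal sketch of applying the same workaround outside the UI, assuming the value is stored under the key `show_progress_every_n_steps` (the key name may differ between builds, so check your own `config.json` first):

```python
import json

def set_preview_steps(config_path, steps):
    """Set how often the live image preview refreshes, in sampling steps.

    Assumes the Settings-tab value is persisted under the
    "show_progress_every_n_steps" key; 0 means no periodic preview.
    Restart the webui afterwards so the change is picked up.
    """
    with open(config_path) as f:
        config = json.load(f)
    config["show_progress_every_n_steps"] = steps
    with open(config_path, "w") as f:
        json.dump(config, f, indent=4)
    return config
```

For example, `set_preview_steps("config.json", 10)` would request a preview refresh every 10 sampling steps; doing it through the Settings tab as described above is equivalent and safer.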
@timntorres Thanks - that hack works. :) This is still a bug, though.