stable-diffusion-webui
[Bug]: Images are messed up in the last generation step(s) (Euler a, Euler, LMS etc.)
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
With any of my models, generated images get screwed up in the last step(s). I can watch the generation going great when I run a script that outputs every step, right up until the last steps. Then it is as if a sort of sharpening takes place in certain places, most noticeably faces. It looks like sharpening, but it is more like distortion. With LMS this effect is most apparent, because there the problem areas are just turned into a glitchy mosaic in the last steps.
Steps to reproduce the problem
- Start Stable Diffusion
- Choose a model
- Input prompts, set the size, and choose the number of steps (how many doesn't matter, though the problem may be worse with fewer steps); the CFG scale doesn't matter too much (within limits)
- Run the generation
- Look at the output with step-by-step preview on (a headless repro sketch follows below)
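If it helps, the same thing can be reproduced headlessly through the built-in HTTP API (launch the webui with `--api`). A minimal sketch, assuming a default local install on port 7860; the prompt, seed, and sampler below are illustrative placeholders:

```python
# Minimal headless repro against a local webui launched with --api.
# Prompt, seed, and sampler below are illustrative placeholders.
import base64
import requests

payload = {
    "prompt": "portrait photo of a woman, detailed face",
    "negative_prompt": "",
    "seed": 12345,
    "steps": 20,
    "cfg_scale": 7,
    "width": 512,
    "height": 512,
    "sampler_name": "Euler a",
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
resp.raise_for_status()

# The API returns base64-encoded images; save the first one so runs at
# different commits can be compared directly.
with open("repro.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```

With a fixed seed, the saved output can be diffed across commits to see exactly when the degradation appears.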
What should have happened?
The last step should improve on the ones before; instead, it now tends to ruin what was building up beautifully.
Commit where the problem happens
645f4e7ef8c9d59deea7091a22373b2da2b780f2
What platforms do you use to access UI ?
Windows
What browsers do you use to access the UI ?
Google Chrome
Command Line Arguments
--xformers
Additional information, context and logs
I hadn't generated anything for a while, and yesterday evening I updated the webui to the latest master to try WD1.4 epoch 2.
I noticed that txt2img generation became way worse, even when trying to recreate past images. Interestingly enough, with highres fix enabled pictures look way better, and img2img works fine.
Something seems to have broken in the past few months.
Maybe check out #7077?
@AI-Casanova It also seems to do it with ckpts that are definitely not overcooked, it's just less apparent.
@bosbrand Just remembered that thread and thought it might give you a bit of extra info in your search. Cheers!
@AI-Casanova Thanks! I can try to train the exact same set again with fewer steps and see how they compare.
I have this exact issue: 19/20 steps look fine, but as soon as it finalises, the image gets distorted, almost like a broken VAE.
I remember this happening when the VAE auto-selection was bugged; it was supposedly fixed, maybe one or two weeks ago. I'm not entirely sure whether it actually got better or remained in the bugged-out state; I think it got a bit better after the fix.
Edit: I made a new clean install, and this time I put my VAE into the VAE folder instead of just beside the models, and everything is perfect now. My old install told me it loaded the VAE but probably didn't do it correctly. My new install, however, doesn't tell me it loaded a VAE, but it works pretty well now (??? :D)
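For anyone else ruling this out: as far as I understand, the webui picks up a VAE either from a file named after the checkpoint sitting next to it, or from the dedicated `models/VAE` folder (selectable in settings). A small sketch to check both locations; the install root and `model` name are placeholders:

```python
# Check the two usual VAE locations (paths assume a default webui layout;
# the install root and "model" name are placeholders).
from pathlib import Path

root = Path("stable-diffusion-webui")

# 1. A VAE auto-paired by name with the checkpoint, e.g. model.vae.pt
#    sitting next to model.ckpt:
beside = root / "models" / "Stable-diffusion" / "model.vae.pt"
print("VAE beside checkpoint:", beside.exists())

# 2. The dedicated VAE folder, selectable via Settings -> SD VAE:
vae_dir = root / "models" / "VAE"
if vae_dir.exists():
    print("VAEs in models/VAE:", sorted(p.name for p in vae_dir.iterdir()))
```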
I've got the same issue.
I have the same issue. I ran some tests, and even when checking out an earlier version of the codebase I could not reproduce the high-quality images I had produced previously (even with identical prompt, settings, seed, etc.). The quality is much reduced.
It seems like the checkpoint/model has actually been damaged in some way; certainly the hash is different (compared to what was saved into PNG info previously). It seems like a very serious bug.
I haven't been able to narrow down exactly the conditions under which it happens, but it seems more noticeable with .safetensors models than with .ckpt files.
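One way to pin down the "hash is different" observation: older builds recorded a short legacy hash in PNG info (if I remember the code right, a SHA-256 over just 64 KiB of the file, truncated to 8 hex characters), while newer builds record a full SHA-256, so the two aren't directly comparable. A sketch for computing both; the filename is a placeholder:

```python
# Compute both hash styles for a checkpoint file (filename is a placeholder).
# The legacy "model hash" layout here is my recollection of the old webui
# code: SHA-256 over 64 KiB at offset 0x100000, truncated to 8 hex chars.
import hashlib

def legacy_model_hash(path: str) -> str:
    with open(path, "rb") as f:
        f.seek(0x100000)
        return hashlib.sha256(f.read(0x10000)).hexdigest()[:8]

def full_sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

print(legacy_model_hash("model.ckpt"))
print(full_sha256("model.ckpt"))
```

If the legacy hash of the file on disk still matches the one in the old PNG info, the file itself hasn't changed and the corruption theory would point elsewhere.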
As an example, here is an original image generated from a sample prompt in an SD tutorial, made on 14th January with whatever the latest a1111 code was at that time:
Here is today's attempt to recreate the same image with the same prompt, seed, and settings, using the same model file.
As you can see the quality is much worse.
The hi-res fix improves things a bit, but it's still nothing like what we had before:
In order to regenerate the image at a quality similar to the original, I had to redownload the checkpoint models, redo the merges, and re-create the model. I was then able to generate something that looked similar in quality to the original:
My conclusion is that something in a recent a1111 code update saved changes to the model that permanently broke it.
Which makes this a much worse bug than the title suggests; it's actually about model corruption, not just generating bad images.
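For context on the redone merges: the checkpoint merger's weighted-sum mode is essentially a per-tensor linear interpolation, so re-running it on freshly downloaded inputs should deterministically reproduce the original merged weights. A rough sketch of the idea (not webui's exact code; filenames and alpha are placeholders):

```python
# Rough sketch of a weighted-sum checkpoint merge (illustrative only, not
# webui's exact implementation). Filenames and alpha are placeholders.
import torch

alpha = 0.5
a = torch.load("modelA.ckpt", map_location="cpu")["state_dict"]
b = torch.load("modelB.ckpt", map_location="cpu")["state_dict"]

merged = {}
for key, tensor in a.items():
    if key in b and tensor.is_floating_point():
        merged[key] = (1 - alpha) * tensor + alpha * b[key].to(tensor.dtype)
    else:
        merged[key] = tensor  # keep unmatched / non-float tensors from model A

torch.save({"state_dict": merged}, "merged.ckpt")
```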
I think it has to be VAE-related.
Same problem for me: weird faces since my last update.
python: 3.10.6 • torch: 1.13.1+cu117 • xformers: 0.0.16rc425 • gradio: 3.16.2 • commit: [0a851508] • checkpoint: [13dfc9921f]
Here is a comparison of before updating and after.
https://imgur.com/a/6qT3NY5
This is before.
This is after, without a VAE:
https://imgur.com/a/W61Dg8U
This is after the update with the VAE added back. You can see that, between the first one and this one, the generated images are similar but not the same.
https://imgur.com/a/u3IZ4As
Sorry about the multiple posts, but it was the only way to add the pictures.
Another one where the problem is really visible: the last and the second-to-last step:
I did a small experiment:
- Created a completely new install
- Installed 7fd90128eb6d1820045bfe2c2c1269661023a712 from scratch (it's a few months old, the version I had used for a long time)
- Downloaded https://huggingface.co/hakurei/waifu-diffusion-v1-3/blob/main/wd-v1-3-float16.ckpt
- Generated 4 pictures
- Updated to master 2c1bb46c7ad5b4536f6587d327a03f0ff7811c5d, running with --reinstall-torch
- Generated again
Seems to be fine... Used negative and positive prompts, Euler a.
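(If anyone wants to repeat this A/B test, the only variable between the two generations is the checked-out commit; something like the following, run from inside the webui clone:)

```python
# Repeat the A/B test above: generate at a known-good commit, then at master.
# Assumes the current directory is a stable-diffusion-webui git clone.
import subprocess

GOOD = "7fd90128eb6d1820045bfe2c2c1269661023a712"  # months-old, known good
HEAD = "2c1bb46c7ad5b4536f6587d327a03f0ff7811c5d"  # master at test time

for commit in (GOOD, HEAD):
    subprocess.run(["git", "checkout", commit], check=True)
    # Launch webui manually (with --reinstall-torch after the update),
    # generate with a fixed seed, then compare the outputs.
    input(f"On {commit}: generate the test images, then press Enter...")
```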
OK, it was the VAE :D
> OK, it was the VAE :D
What do you mean? I have the same problem. What do I need to do with the VAE?
It is not the VAE; I have the problem with and without a VAE. I did a clean reinstall, put my models back, reinstalled torch and xformers, no cigar...
I'm not sure all the examples here refer to the same issue. For example, in the one I posted above it's not about the faces/VAE; the whole image is lower quality.
> Seems to be fine... Used negative and positive prompts, Euler a.
It seems like there is some "corruption moment" that I and others have hit, but that you haven't hit in your tests.
I'm pretty convinced it's a model-corruption issue; I tried running the same generation on various commits of the codebase going back to December and got the same bad results every time. The only thing that fixed it was redownloading and re-merging the checkpoint/safetensors files; everything works fine after doing that, even on the latest codebase.
@alexbfree You're right; I'm a little tired of folks muddling up this thread who don't even have the problem that we have.
--
@sinanisler why don't you fucking read the thread. That is NOT the problem we have. Shut up until you have figured out what the actual problem is.
I have figured out a nuance... When you use the save-intermediate-steps script, there is a difference between saving denoised intermediate steps and saving according to the preview settings. The latter look way better, so something must be going wrong in the denoising steps. When I look at the problem, it looks like the denoising steps are regularly incomplete.
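That would fit with the fact that the two kinds of intermediate images come from different decoders: depending on settings, the live preview can use a cheap linear latent-to-RGB approximation that bypasses the VAE entirely, while saved denoised steps (and the final image) go through the full VAE decoder. A sketch of the two paths; the 4x3 approximation matrix is the commonly circulated one for SD 1.x latents, so treat the exact values as an assumption:

```python
# The two ways an intermediate latent can become a preview image. The 4x3
# matrix is the commonly circulated SD 1.x latent->RGB approximation; treat
# the exact coefficients as an assumption rather than webui's source code.
import torch

APPROX = torch.tensor([
    [ 0.298,  0.207,  0.208],
    [ 0.187,  0.286,  0.173],
    [-0.158,  0.189,  0.264],
    [-0.184, -0.271, -0.473],
])

def cheap_preview(latent: torch.Tensor) -> torch.Tensor:
    # latent: (4, H/8, W/8) -> rough RGB in [-1, 1], no VAE involved.
    return torch.einsum("chw,cr->rhw", latent, APPROX).clamp(-1, 1)

def full_decode(vae, latent: torch.Tensor) -> torch.Tensor:
    # Full VAE decode, as used for the final image (assuming a
    # diffusers-style AutoencoderKL; 0.18215 is SD's latent scale factor).
    return vae.decode(latent.unsqueeze(0) / 0.18215).sample
```

If the cheap path looks fine and the VAE path doesn't, the problem is in decoding; if both degrade at the same step, the latents themselves are being damaged.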
Okay, since people don't seem to read the entire thread or try to understand what the problem is, a summary:
- Images get messed up in the final steps
- The problem is not VAE-related!
- Reinstalling the webui, torch, and xformers doesn't help.
- If you pull the results of the intermediate steps, there is a difference between pulling denoised steps (bad quality) and the full preview (better).
@alexbfree provided evidence that models get messed up. How can that happen, and how can it be prevented? How can a model change if not in training?
@bosbrand What if it has something to do with the VAE, but not in the normal sense? Yes, I have the same problem with or without a VAE, but maybe it's something in the automatic1111 code related to the VAE. 19/20 steps are fine, and then it's almost like the CFG goes to 1000 for the last step, which seems like the same moment a VAE would kick in.
> I'm pretty convinced it's a model-corruption issue; I tried running the same generation on various commits of the codebase going back to December and got the same bad results every time. The only thing that fixed it was redownloading and re-merging the checkpoint/safetensors files; everything works fine after doing that, even on the latest codebase.
This would be pretty easy to check, then: just look at the last-changed date of the given file, try using it with the latest commit, and see if it changes (a quick sketch follows below).
Or are you implying the model is not loaded into memory correctly?
Personally, I think this might also be an issue with fp16 models, if there was a change in how results are computed. Another thing: there is a default VAE now; what if it is applied to models that don't specify one?
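A quick sketch of that on-disk check (the checkpoint path is a placeholder): record the mtime and hash before and after a generation; if neither changes, the file itself at least isn't being rewritten.

```python
# On-disk corruption check: fingerprint the checkpoint before and after a
# generation. If the fingerprint is unchanged, the file isn't being rewritten
# and any corruption would have to happen in memory. Path is a placeholder.
import hashlib
import os

def fingerprint(path: str) -> tuple[float, str]:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return os.path.getmtime(path), h.hexdigest()

before = fingerprint("models/Stable-diffusion/model.ckpt")
# ... run a generation in the webui ...
after = fingerprint("models/Stable-diffusion/model.ckpt")
print("unchanged on disk:", before == after)
```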
I noticed the same problem. The thing is, if you set the live preview setting to "Combined" instead of the default "Prompt", it starts to show a result fairly close to the finished one. But the results shown with the "Prompt" setting are usually significantly better, and there seems to be no way to get that result as the finished image.
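That behaviour lines up with how the preview modes differ under classifier-free guidance: "Prompt" shows only the positive-conditioned prediction, while "Combined" shows the CFG-combined output that the sampler actually steps on. The combination is the standard CFG formula (names here are illustrative):

```python
# Standard classifier-free guidance combination (names are illustrative).
# "Prompt" preview roughly corresponds to `cond` alone; "Combined" shows
# `guided`, which is what actually drives the sampler.
import torch

def cfg_combine(cond: torch.Tensor, uncond: torch.Tensor, cfg_scale: float) -> torch.Tensor:
    return uncond + cfg_scale * (cond - uncond)
```

So a last-step divergence between the two previews would point at the guided/denoising path rather than at the conditioned prediction itself.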
@Mich-666 What is this default VAE? How do we turn it off, and where did you find this out? A hidden VAE in the background somewhere sounds exactly like what's going on.