stable-diffusion-webui
[Bug]: Got black image when trying to use the SD model 2.1
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
Got a black image when trying to use the latest SD 2.1 model, even though I copied the v2-inference-v.yaml file and renamed it to [model-name].yaml
Steps to reproduce the problem
As described above.
What should have happened?
It should generate an image as prompted.
Commit where the problem happens
44c46f0ed395967cd3830dd481a2db759fda5b3b
What platforms do you use to access UI ?
Linux
What browsers do you use to access the UI ?
Google Chrome
Command Line Arguments
--api --listen --no-half-vae
Additional information, context and logs
No response
If you use --no-half it will work, but it then also requires a lot more VRAM to generate larger images.
Same issue here, with Windows 10. :-(
--no-half --no-half-vae --api --listen
works for me...but...
https://github.com/Stability-AI/stablediffusion/commit/c12d960d1ee4f9134c2516862ef991ec52d3f59e seems relevant. We may need to export some environment variable to enable fp16 for 2.1.
Use the v2-inference-v.yaml mentioned above for the 768 model only, and https://github.com/Stability-AI/stablediffusion/blob/main/configs/stable-diffusion/v2-inference.yaml (without -v) for the 512 model. Copy it beside your checkpoint file and give it the same name, but with a .yaml extension.
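The copy-and-rename step above can be sketched as a small helper (the paths in the commented-out call are hypothetical; point them at your actual checkpoint and config):

```python
import shutil
from pathlib import Path

def install_config(checkpoint: Path, config: Path) -> Path:
    """Copy `config` next to `checkpoint`, renamed to <checkpoint-name>.yaml."""
    target = checkpoint.with_suffix(".yaml")
    shutil.copyfile(config, target)
    return target

# Hypothetical paths -- point these at your real install:
# install_config(Path("models/Stable-diffusion/v2-1_768-ema-pruned.ckpt"),
#                Path("v2-inference-v.yaml"))
```

The webui picks the config up because it looks for a .yaml file with the same basename as the checkpoint.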
Theoretically there shouldn't be an issue with using SD 2.1 if SD 2.0 already worked without --no-half, so I'm not sure why it's broken.
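One plausible mechanism, consistent with the upstream attention-precision fix but not confirmed in this thread: the attention logits overflow fp16's range, and the resulting NaNs propagate through the latent, which then decodes to a solid black image. A minimal numpy illustration:

```python
import numpy as np

# fp16 tops out around 65504. If attention logits exceed that, they
# overflow to inf, and inf - inf (as happens inside a numerically
# stabilized softmax) yields NaN; a latent full of NaNs decodes to black.
logit = np.float16(60000) * np.float16(2)  # exceeds fp16 range -> inf
gap = logit - logit                        # inf - inf -> NaN
```

This is also why --no-half (fp32 everywhere) and the upstream fp32-attention fix both sidestep the problem.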
Solution here #5506
@miguelgargallo Adding --no-half isn't really a PR-worthy fix, as it should work without that argument.
I did some more testing and I found another way to fix it!
If you enable xformers with --xformers, then you don't have to use --no-half!
You could try setting the following environment variable.
STABLE_DIFFUSION_COMMIT_HASH="8bde0cf64f3735bb33d93bdb8e28120be45c479b"
and additionally if you want to use half-precision
ATTN_PRECISION=fp16
So for example for the webui-user.bat
set STABLE_DIFFUSION_COMMIT_HASH="8bde0cf64f3735bb33d93bdb8e28120be45c479b"
set ATTN_PRECISION=fp16
This should checkout the stablediffusion-repository with the specified commit on the next launch. And "8bde0cf64f3735bb33d93bdb8e28120be45c479b" specifically is the commit that adds the ATTN_PRECISION environment variable (see https://github.com/Stability-AI/stablediffusion/commit/8bde0cf64f3735bb33d93bdb8e28120be45c479b).
Works for me, but my local fork is a bit diverged from the current master. So someone should retest this. :)
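The override pattern described above can be sketched like this (a hypothetical simplification of the launcher's behavior, not its actual code):

```python
import os

# Hypothetical simplification: an environment variable, when set, pins the
# commit that the stablediffusion repo is checked out to on the next launch.
DEFAULT_SD_COMMIT = "default-pinned-commit"  # placeholder, not the real default

def resolve_sd_commit(env=os.environ):
    # The env value wins over the hard-coded default; strip quotes, since
    # `set VAR="..."` on Windows keeps them in the value.
    return env.get("STABLE_DIFFUSION_COMMIT_HASH", DEFAULT_SD_COMMIT).strip('"')
```

Setting the variable in webui-user.bat / webui-user.sh before `call webui.bat` (or `./webui.sh`) is what makes it visible to the launcher.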
I can confirm black images only happen on the 768 models for 2.1 and 2.0. The 512 models don't produce black images, except maybe on GTX 10xx cards like before. I didn't have to use --no-half before, and I probably can't now since I only have 4 GB of VRAM. Well, I can if I use --lowvram, but I really didn't have to on pre-2.0 models.
Where do we put this? STABLE_DIFFUSION_COMMIT_HASH="8bde0cf64f3735bb33d93bdb8e28120be45c479b"
I'm on a 1060 6GB, and the v2.1 512 model was returning images while the v2.1 768 model needed additional work to not end up blank. Turning xformers back on did allow the 768 model to properly generate an image for me. Considering almost all my VRAM is used while generating, --no-half probably isn't a viable solution without other flags which would slow the process for me.
Summary: xformers makes the 768 model function on my hardware.
Where do we put this? STABLE_DIFFUSION_COMMIT_HASH="8bde0cf64f3735bb33d93bdb8e28120be45c479b"
In whatever script you use to launch the webui. For Windows, most likely webui-user.bat; for Linux, most likely webui-user.sh.
So the webui-user.bat could look something like this (remember to set your COMMANDLINE_ARGS)
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=your command line options
set STABLE_DIFFUSION_COMMIT_HASH="c12d960d1ee4f9134c2516862ef991ec52d3f59e"
set ATTN_PRECISION=fp16
call webui.bat
Summary: xformers makes the 768 model function on my hardware.
Tried xformers with the 768 model before switching the commit hash. It worked fine for lower resolutions, but for unusually large pictures like 1920x1080 I kept consistently getting a black image. I'm on an RTX 3090.
I did some more testing and I found another way to fix it!
If you enable xformers with --xformers, then you don't have to use --no-half!
Yes, had the same issue and xformers fixed it.
Results are also oversaturated or deep-fried somehow; maybe it's because of v-prediction?
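For context on that guess: v-prediction (an assumption here, as defined in Salimans & Ho's "Progressive Distillation" paper, which the 768 model uses) has the model predict v = a·eps − s·x0 instead of eps, so decoding its output with an eps-objective config misinterprets it, which could plausibly show up as oversaturation rather than pure black. A minimal sketch of the relationship:

```python
import numpy as np

# Assumption: v-prediction as in Salimans & Ho, "Progressive Distillation
# for Fast Sampling of Diffusion Models". With
#   x_t = a * x0 + s * eps   and   a**2 + s**2 == 1,
# the v-objective model predicts v = a * eps - s * x0 instead of eps.
def eps_from_v(v, x_t, a, s):
    # a*v + s*x_t = (a**2 + s**2) * eps = eps
    return a * v + s * x_t
```

This is also why the -v config file matters: it tells the loader which parameterization the checkpoint was trained with.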
@miguelgargallo Adding --no-half isn't really a PR-worthy fix, as it should work without that argument.
Any code change that fixes the project is sufficient for a PR, and I also argued for and thoroughly documented all the steps.
If you have an AMD card you can't use xformers, and full precision just runs out of memory when doing 768x768, even though I have 16 GB of VRAM.
I can't find any usage of ATTN_PRECISION in the code at the commit hash mentioned above. Their latest commit does have some code related to it, though (c12d960d1ee4f9134c2516862ef991ec52d3f59e).
However, even after using the latest version and setting it to fp16, I still get black images.
I can't find any usage of ATTN_PRECISION in code with the commit hash mentioned above. Their latest commit does have some code related to it though (c12d960d1ee4f9134c2516862ef991ec52d3f59e)
You mean this commit, which uses ATTN_PRECISION? https://github.com/Stability-AI/stablediffusion/commit/e1797ae248408ea47561eeb8755737f1e35784f2
@RainfoxAri listed the example here in the wiki. Right or wrong? Does it need that commit hash to work properly? It is confusing to those wanting to run in fp16 mode without --xformers.
--xformers does not work for me at all; it crashes with
NotImplementedError: Could not run 'xformers::efficient_attention_forward_cutlass' with arguments from the 'CUDA' backend.
However, even not putting --xformers in doesn't work; I have to pip uninstall it. So there needs to be some code cleanup on this front. --no-half, on the other hand, works fine for me.
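Before passing --xformers, a quick probe (a sketch, not the webui's own detection logic) can confirm whether xformers is even importable; run it with the same interpreter/venv the webui uses:

```python
import importlib.util

def xformers_available() -> bool:
    # True if the xformers package can be found by this interpreter,
    # without actually importing (and possibly crashing on) it.
    return importlib.util.find_spec("xformers") is not None
```

If this returns True but --xformers still crashes, the installed build likely doesn't match your GPU architecture or CUDA version, which matches the uninstall workaround above.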
@OWKenobi I get this same error, it's very frustrating! See issue #5427 for more info (for you and others), but there doesn't seem to be a solution for now.
I have spent the last 12 hours trying to recompile xformers because mine got zapped. On 1.5 I was done in 25 minutes; now it's all kinds of hell, so I gave up, only to find that 2.1 gives my 1060 6 GB a solid black 768x768 image without xformers. Since --xformers has NOT worked on my Pascal since the day it was introduced, I decided to ditch it, only to end up at this issue.
Has this been fixed? Still getting black images with 768px 2.1, I can't use no half so looking for another way.
Has this been fixed? Still getting black images with 768px 2.1, I can't use no half so looking for another way.
Either you use xformers, or you use --no-half, or you fall back to 2.0. Xformers is becoming mandatory from 2.1 onwards, I believe they said (or --no-half). They may change that, but even my 1060 can do fp32, albeit with the 6 GB VRAM limit.
Xformers is becoming mandatory from 2.1 onwards, I believe they said (or the no-half). They may change that but even my 1060 can do fp32 albeit the 6 gig ram issue.
Seems a bit of an odd decision given that xformers is Nvidia-only.
Xformers is becoming mandatory from 2.1 onwards, I believe they said (or the no-half). They may change that but even my 1060 can do fp32 albeit the 6 gig ram issue.
Seems a bit of an odd decision given that xformers is Nvidia-only.
Hence the --no-half flag, which I believe AMD can use. Personally, my hope is that RDNA4 swings this around so we no longer need Nvidia and its BS. CUDA is the only reason I stay with NVIDIA.
Closing as stale.