
[Bug]: Black image when trying to use the SD 2.1 model

Open ruradium opened this issue 2 years ago • 21 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

Got black images when trying to use the latest SD 2.1 model, even though I copied the v2-inference-v.yaml file and renamed it to [model-name].yaml.

Steps to reproduce the problem

As described above.

What should have happened?

It should generate an image as prompted.

Commit where the problem happens

44c46f0ed395967cd3830dd481a2db759fda5b3b

What platforms do you use to access the UI?

Linux

What browsers do you use to access the UI?

Google Chrome

Command Line Arguments

--api --listen --no-half-vae

Additional information, context and logs

No response

ruradium avatar Dec 07 '22 16:12 ruradium

If you use --no-half it will work, but then it also requires a lot more VRAM to generate larger images.

ProGamerGov avatar Dec 07 '22 16:12 ProGamerGov

Same issue here, with Windows 10. :-(

bsalberto77 avatar Dec 07 '22 17:12 bsalberto77

--no-half --no-half-vae --api --listen works for me...but...

https://github.com/Stability-AI/stablediffusion/commit/c12d960d1ee4f9134c2516862ef991ec52d3f59e seems relevant. We may need to export some environment variable to enable fp16 for 2.1.
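A minimal sketch of what that might look like on Linux, assuming the variable turns out to be the ATTN_PRECISION one added upstream (see the later comments in this thread):

export ATTN_PRECISION=fp16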

nousr avatar Dec 07 '22 17:12 nousr

Use the v2-inference-v.yaml mentioned above. Use this file for the 768 model only, and https://github.com/Stability-AI/stablediffusion/blob/main/configs/stable-diffusion/v2-inference.yaml (without -v) for the 512 model. Copy it beside your checkpoint file and give it the same name but with a .yaml extension.
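For example, on Linux that could look like the sketch below (the checkpoint filename, the raw-download URL, and the webui's models/Stable-diffusion directory are assumptions; substitute your own model name):

# place the v-prediction config next to the 768 checkpoint, named after the checkpoint
cd models/Stable-diffusion
wget https://raw.githubusercontent.com/Stability-AI/stablediffusion/main/configs/stable-diffusion/v2-inference-v.yaml
mv v2-inference-v.yaml v2-1_768-ema-pruned.yaml   # example name, matching v2-1_768-ema-pruned.ckpt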

djdookie avatar Dec 07 '22 18:12 djdookie

Theoretically there shouldn't be an issue with using SD 2.1 if SD 2.0 already worked without --no-half, so I'm not sure why it's broken.

ProGamerGov avatar Dec 07 '22 18:12 ProGamerGov

Solution here #5506

miguelgargallo avatar Dec 07 '22 18:12 miguelgargallo

@miguelgargallo Adding --no-half isn't really a PR worthy fix as it should work without that argument.

ProGamerGov avatar Dec 07 '22 18:12 ProGamerGov

I did some more testing and I found another way to fix it!

If you enable xformers with --xformers, then you don't have to use --no-half!
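For example, a sketch for the reporter's Linux setup, adding --xformers to the original arguments in webui-user.sh:

export COMMANDLINE_ARGS="--xformers --api --listen --no-half-vae"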

ProGamerGov avatar Dec 07 '22 19:12 ProGamerGov

You could try setting the following environment variable.

STABLE_DIFFUSION_COMMIT_HASH="8bde0cf64f3735bb33d93bdb8e28120be45c479b"

and additionally if you want to use half-precision

ATTN_PRECISION=fp16

So for example for the webui-user.bat

set STABLE_DIFFUSION_COMMIT_HASH="8bde0cf64f3735bb33d93bdb8e28120be45c479b"
set ATTN_PRECISION=fp16

This should check out the stablediffusion repository at the specified commit on the next launch. And "8bde0cf64f3735bb33d93bdb8e28120be45c479b" specifically is the commit that adds the ATTN_PRECISION environment variable (see https://github.com/Stability-AI/stablediffusion/commit/8bde0cf64f3735bb33d93bdb8e28120be45c479b).
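To verify which commit actually got checked out after the next launch, a quick sketch (the repositories/stable-diffusion-stability-ai directory name is the webui's default clone location and is assumed here):

cd repositories/stable-diffusion-stability-ai
git rev-parse HEAD   # should print 8bde0cf64f3735bb33d93bdb8e28120be45c479b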

Works for me, but my local fork has diverged a bit from the current master. So someone should retest this. :)

RainfoxAri avatar Dec 07 '22 19:12 RainfoxAri

I can confirm black images only happen on the 768 models for 2.1 and 2.0. The 512 models don't produce black images, except maybe on GTX 10xx cards like before. I really didn't have to use --no-half before, and I probably can't now since I only have 4GB of VRAM. Well, I can if I use --lowvram, but I really didn't have to on pre-2.0 models.

candymint23 avatar Dec 07 '22 19:12 candymint23

Where do we put this? STABLE_DIFFUSION_COMMIT_HASH="8bde0cf64f3735bb33d93bdb8e28120be45c479b"

candymint23 avatar Dec 07 '22 19:12 candymint23

I'm on a 1060 6GB, and the v2.1 512 model was returning images while the v2.1 768 model needed additional work to not end up blank. Turning xformers back on did allow the 768 model to properly generate an image for me. Considering almost all my VRAM is used while generating, --no-half probably isn't a viable solution without other flags which would slow the process for me.

Summary: xformers makes the 768 model function on my hardware.

MegaScience avatar Dec 07 '22 19:12 MegaScience

Where do we put this? STABLE_DIFFUSION_COMMIT_HASH="8bde0cf64f3735bb33d93bdb8e28120be45c479b"

In whatever script you use to launch the webui. So for Windows most likely webui-user.bat, for Linux most likely webui-user.sh.

So the webui-user.bat could look something like this (remember to set your COMMANDLINE_ARGS):

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=your command line options
set STABLE_DIFFUSION_COMMIT_HASH="c12d960d1ee4f9134c2516862ef991ec52d3f59e"
set ATTN_PRECISION=fp16

call webui.bat
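For Linux, the webui-user.sh equivalent could look like the following sketch (webui.sh sources webui-user.sh, so plain exports should work; the values just mirror the .bat above):

#!/bin/bash

export COMMANDLINE_ARGS="your command line options"
export STABLE_DIFFUSION_COMMIT_HASH="c12d960d1ee4f9134c2516862ef991ec52d3f59e"
export ATTN_PRECISION=fp16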

Summary: xformers makes the 768 model function on my hardware.

I tried xformers with the 768 model before switching the commit hash. It worked fine for lower resolutions, but for unusually large pictures like 1920x1080 I kept consistently getting a black image. I'm on an RTX 3090.

RainfoxAri avatar Dec 07 '22 19:12 RainfoxAri

I did some more testing and I found another way to fix it!

If you enable xformers with --xformers, then you don't have to use --no-half!

Yes, I had the same issue and xformers fixed it.

curtwagner1984 avatar Dec 07 '22 19:12 curtwagner1984

Results are also oversaturated or deep-fried somehow; maybe it's because of v-prediction?

candymint23 avatar Dec 07 '22 19:12 candymint23

@miguelgargallo Adding --no-half isn't really a PR worthy fix as it should work without that argument.

Any code contributed to any file that fixes the project is sufficient for a PR, and I also explained and thoroughly documented all the steps.

miguelgargallo avatar Dec 07 '22 20:12 miguelgargallo

If you have an AMD card you can't use xformers, and full precision just runs out of memory when doing 768x768, even though I have 16GB of VRAM.

CapsAdmin avatar Dec 08 '22 01:12 CapsAdmin

I can't find any usage of ATTN_PRECISION in code with the commit hash mentioned above. Their latest commit does have some code related to it though (c12d960d1ee4f9134c2516862ef991ec52d3f59e)
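A quick way to check locally whether the checked-out copy contains the variable at all (the repositories/stable-diffusion-stability-ai path is the webui's default clone location and is assumed here):

grep -rn "ATTN_PRECISION" repositories/stable-diffusion-stability-ai/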

However, even after using the latest version and setting this to fp16, I still get black images.

CapsAdmin avatar Dec 08 '22 01:12 CapsAdmin

I can't find any usage of ATTN_PRECISION in code with the commit hash mentioned above. Their latest commit does have some code related to it though (c12d960d1ee4f9134c2516862ef991ec52d3f59e)

Did you mean this commit, which does use ATTN_PRECISION? https://github.com/Stability-AI/stablediffusion/commit/e1797ae248408ea47561eeb8755737f1e35784f2

fractal-fumbler avatar Dec 08 '22 16:12 fractal-fumbler

@RainfoxAri listed the example here in the wiki. Right or wrong? Does it need that commit hash to work properly? It is confusing for those wanting to run in fp16 mode without --xformers.

ClashSAN avatar Dec 10 '22 20:12 ClashSAN

--xformers does not work for me at all; it crashes with

  NotImplementedError: Could not run 'xformers::efficient_attention_forward_cutlass' with arguments from the 'CUDA' backend. 

However, even removing "--xformers" doesn't help; I have to pip uninstall it (see the sketch below). So there needs to be some code cleanup on this front. --no-half, on the other hand, works fine for me.
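Uninstalling from the webui's venv might look like this (the venv path is the webui's default and is assumed):

source venv/bin/activate
pip uninstall xformers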

OWKenobi avatar Dec 13 '22 10:12 OWKenobi

@OWKenobi I get this same error, it's very frustrating! See issue #5427 for more info (for you and others), but there doesn't seem to be a solution for now.

asimard1 avatar Dec 14 '22 04:12 asimard1

I have spent the last 12 hours trying to recompile xformers because mine got zapped. On 1.5 I was done in 25 minutes; now it's all kinds of hell, so I said to hell with it, only to find that 2.1 gives my 1060 6GB a solid black 768x768 image without xformers. Since --xformers has NOT worked on my Pascal card since the day it was introduced, I decided to ditch it, only to end up at this issue.

DarkAlchy avatar Dec 18 '22 13:12 DarkAlchy

Has this been fixed? I'm still getting black images with 768px 2.1; I can't use --no-half, so I'm looking for another way.

Straafe avatar Jan 19 '23 22:01 Straafe

Has this been fixed? I'm still getting black images with 768px 2.1; I can't use --no-half, so I'm looking for another way.

Either you use xformers, or you use --no-half, or you fall back to 2.0. Xformers (or --no-half) is becoming mandatory from 2.1 onwards, I believe they said. They may change that, but even my 1060 can do fp32, albeit with the 6GB VRAM issue.

DarkAlchy avatar Jan 20 '23 00:01 DarkAlchy

Xformers (or --no-half) is becoming mandatory from 2.1 onwards, I believe they said. They may change that, but even my 1060 can do fp32, albeit with the 6GB VRAM issue.

Seems a bit of an odd decision given that xformers is Nvidia-only.

CapsAdmin avatar Jan 20 '23 09:01 CapsAdmin

Xformers (or --no-half) is becoming mandatory from 2.1 onwards, I believe they said. They may change that, but even my 1060 can do fp32, albeit with the 6GB VRAM issue.

Seems a bit of an odd decision given that xformers is Nvidia-only.

Hence the --no-half flag, which I believe AMD can use. Personally, my hope is that RDNA4 swings this around so we no longer need Nvidia and its BS. CUDA is the only reason I stay with NVIDIA.

DarkAlchy avatar Jan 20 '23 15:01 DarkAlchy

Closing as stale.

catboxanon avatar Aug 03 '23 19:08 catboxanon