
The result is a black square

x18-1 opened this issue on Oct 03 '22 • 12 comments

I want to draw something, but when I run the app the result is a black square. And there are no hints in the Terminal, so I don't know how to solve the problem.

x18-1 · Oct 03 '22

If you are getting a black square, it's most likely an NSFW-flagged image. You can try changing your prompt and generating again!

amrrs · Oct 03 '22

@x18-1 The false positive rate of the safety checker/NSFW detector is relatively high for this project, so you can also disable the safety checker if you keep getting a black square.
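
For anyone who needs it, a minimal sketch of what disabling the checker can look like, assuming the app builds a diffusers StableDiffusionPipeline (the model id is illustrative, and the exact hook depends on the diffusers version):

```python
from diffusers import StableDiffusionPipeline

# Passing safety_checker=None skips the NSFW check entirely, so flagged
# results are returned as-is instead of being replaced with a black image.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    safety_checker=None,
)

# On diffusers versions where from_pretrained insists on a checker, the
# attribute can be cleared after loading instead:
# pipe.safety_checker = None
```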

lkwq007 · Oct 03 '22

@lkwq007

I tried, but it didn't work.

x18-1 · Oct 04 '22

[screenshot]

lkwq007 · Oct 04 '22

@lkwq007

That's exactly what I did, but it didn't work.

[screenshot of the app]

[screenshot of the Terminal]

x18-1 · Oct 05 '22

"Potential NSFW content was detected in one or more images. A black image will be returned instead. Try again with a different prompt and/or seed." appears in my terminal。But I have no idea how to do

x18-1 · Oct 05 '22

Same issue here... safety checker unchecked. All the other Stable Diffusion stuff I've used hates my GTX 1660 and wants precision set to full to show anything but a blank or green square. Wondering if that's the case here, but I'm still a pleb and need a guide on how to change this... Haven't seen anything about it for Infinity yet.

Eprise1701e · Oct 18 '22

I too have a GTX 1660 Ti. At first I had the same issue with Stable Diffusion from AUTOMATIC1111; however, after I forced full precision via the command line with "--opt-split-attention --precision full --no-half", I could generate 16 batches of 4 images if I also added "--medvram --force-enable-xformers". Is there a way to force full precision in Infinity?

vlsech · Oct 21 '22

@vlsech The latest version can run with python app.py --fp32 --lowvram
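
For reference, a minimal sketch of what fp32 means at the diffusers level; the GTX 16xx series tends to produce black or green images under fp16, so the weights stay in full precision (model id illustrative, and the --lowvram behavior is approximated here by attention slicing, not taken from Infinity's actual code):

```python
import torch
from diffusers import StableDiffusionPipeline

# Full precision: avoids the fp16 black/green-image problem on GTX 16xx cards.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float32,
).to("cuda")

# Compute attention in chunks to lower peak VRAM — the kind of saving a
# --lowvram-style flag typically enables.
pipe.enable_attention_slicing()
```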

lkwq007 · Oct 22 '22

Thanks, just tested.

Am seeing "CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 6.00 GiB total capacity; 5.31 GiB already allocated; 0 bytes free; 5.32 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF" Any switches to prevent memory fragmentation? Lastly the NVIDIA, the xformers drivers can provide a 30% speed boost to boot. Thanks.

vlsech · Oct 22 '22

I tried python app.py --fp32 --lowvram and got the same kind of error:

CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 3.41 GiB already allocated; 0 bytes free; 3.46 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

So I tried using PYTORCH_CUDA_ALLOC_CONF to set the garbage collection and split size options. I first set garbage_collection_threshold:0.6, then upped it to 0.8; neither fixed my problem. I then added max_split_size_mb and tried the values 64, 128, and 256. None resolved the problem.
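
For anyone reproducing this, a sketch of how that variable gets applied; it has to be set before PyTorch initializes CUDA, e.g. at the very top of the script or in the shell before launching (the values are just the ones tried above):

```python
import os

# Must be set before the first CUDA allocation: garbage_collection_threshold
# triggers allocator cleanup earlier, and max_split_size_mb caps block
# splitting to reduce fragmentation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = (
    "garbage_collection_threshold:0.8,max_split_size_mb:128"
)

import torch  # imported only after the variable is set
```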

I have gotten a Stable Diffusion fork to work: Automatic1111's fork runs if I invoke their app with the following parameters:

--lowvram --precision full --no-half --always-batch-cond-uncond --unload-gfpgan --opt-split-attention

Without all of the parameters, I would either run out of VRAM, or get black/green blocks (unrelated to NSFW filters).

jdries3 · Oct 24 '22

@lkwq007, Infinity looks so promising; hoping you might look at offering more low-VRAM options for us underpowered users, such as those listed here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Troubleshooting
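
For context, a sketch of the kind of low-VRAM knobs being requested, written against a plain diffusers pipeline rather than Infinity itself (method availability depends on the diffusers version, and enable_sequential_cpu_offload needs accelerate installed):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float32,
)

pipe.enable_attention_slicing()       # chunked attention: lower peak VRAM
pipe.enable_sequential_cpu_offload()  # keep weights on CPU, move submodules
                                      # to the GPU only while they run
```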

vlsech · Oct 25 '22