stablediffusion-infinity
The result is a black square
I wanna draw something, but when I run the app the result is a black square. There are no suggestions in the Terminal, so I don't know how to solve the problem.
If you are getting a black square, most likely it's an NSFW image. You can try changing your prompt and running it again!
@x18-1 The false positive rate of the safety checker / NSFW detector is relatively high for this project, so you can also disable the safety checker if you keep getting a black square.
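(For anyone who wants to do that at the library level rather than through the app's own options: below is a minimal sketch of disabling the safety checker on a plain diffusers StableDiffusionPipeline. The model ID and prompt are placeholders, and this is not Infinity's actual code path.)

```python
import torch
from diffusers import StableDiffusionPipeline

# Minimal sketch: load a pipeline with the safety checker disabled.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # placeholder model ID
    safety_checker=None,                # skip the NSFW filter entirely
    torch_dtype=torch.float16,          # use torch.float32 on cards that need full precision
)
pipe = pipe.to("cuda")

image = pipe("a lighthouse on a cliff at sunset").images[0]
image.save("out.png")
```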
@lkwq007 I tried, but it didn't work.
@lkwq007 That's exactly what I did, but it didn't work.
The first photo shows the app, and the second shows the Terminal.
"Potential NSFW content was detected in one or more images. A black image will be returned instead. Try again with a different prompt and/or seed." appears in my terminal。But I have no idea how to do
Same issue here... safety checker unchecked. All the other Stable Diffusion stuff I've used hates my GTX 1660 and wants precision set to full to show anything but a blank or green square. Wondering if that's the case here, but I'm still a pleb and need a guide on how to change this... Haven't seen anything about it for Infinity yet.
I too have a GTX 1660 Ti. At first I had the same issue in Stable Diffusion from AUTOMATIC1111; however, after I forced full precision via the command line with "--opt-split-attention --precision full --no-half", I could generate 16 batches of 4 images if I also added "--medvram --force-enable-xformers". Is there a way to force full precision in Infinity?
@vlsech The latest version can run with python app.py --fp32 --lowvram
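(For context, at the diffusers level "full precision" simply means keeping the model weights in torch.float32 instead of casting them to torch.float16. The snippet below is only a sketch of that idea with a placeholder model ID; it is not what app.py actually does with --fp32/--lowvram.)

```python
import torch
from diffusers import StableDiffusionInpaintPipeline

# Full precision: float32 weights use roughly twice the VRAM of fp16, but avoid
# the black/green outputs some GTX 16xx cards produce in half precision.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",   # placeholder model ID
    torch_dtype=torch.float32,
).to("cuda")
```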
Thanks, just tested.
I am seeing "CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 6.00 GiB total capacity; 5.31 GiB already allocated; 0 bytes free; 5.32 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF". Are there any switches to prevent memory fragmentation? Lastly, on NVIDIA cards the xformers library can provide a ~30% speed boost to boot. Thanks.
I tried python app.py --fp32 --lowvram and got the same kind of error:
CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 3.41 GiB already allocated; 0 bytes free; 3.46 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
So I tried using PYTORCH_CUDA_ALLOC_CONF to set the garbage collection and split-size options. I first set garbage_collection_threshold:0.6, then upped it to 0.8; neither fixed the problem. I then added max_split_size_mb and tried the values 64, 128, and 256. None resolved the problem.
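(For anyone following along, these allocator options are passed through the PYTORCH_CUDA_ALLOC_CONF environment variable before launching the app; the values below are simply the ones tried above, which in this case did not help.)

```bash
# Linux/macOS: set the allocator options for a single run
PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.8,max_split_size_mb:128 python app.py --fp32 --lowvram

# Windows (cmd.exe)
set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.8,max_split_size_mb:128
python app.py --fp32 --lowvram
```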
I have, however, gotten a Stable Diffusion fork to work: AUTOMATIC1111's fork runs if I invoke their app with the following parameters:
--lowvram --precision full --no-half --always-batch-cond-uncond --unload-gfpgan --opt-split-attention
Without all of the parameters, I would either run out of VRAM, or get black/green blocks (unrelated to NSFW filters).
@lkwq007, Infinity looks so promising; I'm hoping you may look at offering more low-VRAM options for us underpowered users, such as those listed here: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Troubleshooting
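(For reference, most of the low-VRAM tricks on that wiki page correspond to a handful of diffusers calls. The sketch below shows the usual knobs on a generic pipeline with a placeholder model ID; it is not Infinity's implementation.)

```python
import torch
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",   # placeholder model ID
    torch_dtype=torch.float32,                # full precision for cards that misbehave in fp16
)

pipe.enable_attention_slicing()        # compute attention in chunks to reduce peak VRAM
pipe.enable_sequential_cpu_offload()   # keep submodules on the CPU until they are needed

# If xformers is installed, memory-efficient attention saves VRAM and adds speed:
# pipe.enable_xformers_memory_efficient_attention()
```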