
Plain green images generated

Open pedropachecog opened this issue 3 years ago • 28 comments

It runs well, but it generates images where all pixels are the same shade of green, specifically #007B00.

In the basujindal repo, changing the default from autocast to full in txt2img.py fixes this problem. I made the same change to the file in scripts/orig_scripts here, but it didn't make a difference.

Whenever I run dream.py with --full_precision it runs out of memory. I have 6 GB of VRAM.

pedropachecog avatar Aug 23 '22 16:08 pedropachecog
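For context, the autocast-to-full change mentioned above comes down to which context manager wraps sampling. A minimal sketch, assuming the stock txt2img.py layout (the helper name is mine; the real script builds this choice inline with torch.autocast):

```python
from contextlib import nullcontext

def precision_scope(precision: str, device: str = "cuda"):
    """Pick the context manager that wraps sampling.

    "autocast" runs eligible ops in fp16, which is what produces NaNs
    (solid green/black images) on GTX 16xx cards; anything else falls
    back to nullcontext, i.e. plain fp32 everywhere.
    """
    if precision == "autocast":
        import torch  # imported here so the fp32 path works without torch
        return torch.autocast(device)
    return nullcontext()
```

Switching the default to "full" simply routes every run through the nullcontext branch.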

Use basujindal's optimized script with the switch --precision full; it won't blow up the VRAM usage. Close any applications that use hardware acceleration (or disable it in apps such as Steam, Discord, and Chrome). For the regular stable-diffusion code, try --W 384 --H 384 (lower quality, but it will work).

bscout9956 avatar Aug 23 '22 16:08 bscout9956

Thank you so much. I realized I posted this on the wrong repo. I apologize. The solution I described above works for basujindal's repo.

pedropachecog avatar Aug 23 '22 17:08 pedropachecog

just use this https://huggingface.co/spaces/stabilityai/stable-diffusion

breadbrowser avatar Aug 23 '22 19:08 breadbrowser

I have the same problem. I tried every .ckpt file; 6 GB of VRAM (NVIDIA 1660 Super). Now, after reinstalling everything, the generated pictures are black...

Edit: Back to green again

Iustin117 avatar Aug 24 '22 10:08 Iustin117

The issue is still valid, but I'm afraid it's related to PyTorch... I have seen comments online about the 1660 getting NaNs when using fp16.

bscout9956 avatar Aug 24 '22 11:08 bscout9956

You can debug this issue by checking the output of each step. It is likely a NaN issue from fp16 (which you can resolve by switching to fp32), or some weights not being initialized properly (that happened to me once).

taoisu avatar Aug 25 '22 00:08 taoisu
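Checking each step's output for NaNs, as suggested above, can be as simple as the following pure-Python stand-in (with torch you would call torch.isnan(t).any() / torch.isinf(t).any() on the actual tensors; the helper name is mine):

```python
import math

def count_non_finite(values):
    # Count NaN/Inf entries in a flat list of floats; a non-zero result
    # at some step pinpoints where fp16 blows up in the pipeline.
    return sum(1 for v in values if math.isnan(v) or math.isinf(v))
```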

I have the same problem. I have a laptop with a 4 GB GTX 1650. I got black images at first and green images now, after a few changes. I have tried --precision full, but I only get an out-of-memory error.

Adolfovik avatar Aug 25 '22 08:08 Adolfovik

For me, it finally worked by adding this argument: "--precision full".

Iustin117 avatar Aug 25 '22 14:08 Iustin117

It finally worked for me too with --precision full. But I have to close the browser (Mozilla); it seems I am very short on memory (8 GB RAM and 4 GB GPU). I can only generate one image at a time if I don't want to run out of memory, but that's fine. I still can't believe this impressive AI implementation; it's almost magic!

Adolfovik avatar Aug 26 '22 16:08 Adolfovik

I'm also on a 1660 Super and getting green images.

migero avatar Sep 02 '22 16:09 migero

I'm also on a 1660 Super, and using "webui.cmd --precision full" does not work... any other clue?

jorgitobg avatar Sep 02 '22 17:09 jorgitobg

I'm also on a 1660 Super and getting green images. It wasn't fixed with --precision full alone (I also had to modify line 281 in txt2img.py: precision_scope = autocast if opt.precision=="full" else nullcontext).

baobabKoodaa avatar Sep 04 '22 15:09 baobabKoodaa

Hi! After changing txt2img.py, does the desktop interface work? I'm still getting green images...

jorgitobg avatar Sep 05 '22 11:09 jorgitobg

I'm also on a 1660 Super and getting green images.

ccimage avatar Sep 06 '22 09:09 ccimage

What prompt did you use? I'm on the same hardware, and even when trying to generate only one image I still get the memory error. Thanks!

CptTony avatar Sep 07 '22 18:09 CptTony

I reduced memory usage like this:

  • in scripts/txt2img.py, function load_model_from_config, line 63, changed model.cuda() to model.cuda().half()
  • removed the invisible watermarking
  • reduced n_samples to 1
  • reduced the resolution to 256x256
  • removed the safety (NSFW) filter

baobabKoodaa avatar Sep 07 '22 18:09 baobabKoodaa
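As a rough sanity check on why those changes help: activation memory scales with pixel count and batch size, while model.cuda().half() halves the weight footprint. A back-of-envelope sketch (the helper name is mine; this is not the repo's actual memory accounting):

```python
def relative_vram(width, height, n_samples, half_precision):
    # Both factors are relative to the 512x512, single-sample, fp32
    # defaults: activations scale with pixels * batch, weights with dtype.
    activations = (width * height) / (512 * 512) * n_samples
    weights = 0.5 if half_precision else 1.0
    return activations, weights
```

With the settings from the list above (256x256, one sample, halved model), activations drop to roughly a quarter and the weights to half of the defaults.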

I reduced memory usage like this:

  • in scripts/txt2img.py, function load_model_from_config, line 63, changed model.cuda() to model.cuda().half()
  • removed the invisible watermarking
  • reduced n_samples to 1
  • reduced the resolution to 256x256
  • removed the safety (NSFW) filter

Is what you've listed as bullet points the result of switching that one line? If not, how did you manage to change those attributes, @baobabKoodaa?

ZenaMel avatar Sep 08 '22 10:09 ZenaMel

Is what you've listed as bullet points the result of switching that one line? If not, how did you manage to change those attributes, @baobabKoodaa?

I modified the txt2img script in multiple places. You can just Ctrl+F the file for anything related to watermarking and comment it out; same for the safety filter. Remember to comment out unused imports after commenting out code.

baobabKoodaa avatar Sep 08 '22 11:09 baobabKoodaa
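A low-risk way to make that kind of edit is to replace the calls with pass-through stubs rather than deleting them, so the rest of the pipeline stays untouched (function names here are illustrative, not the exact ones in txt2img.py):

```python
def put_watermark(img, wm_encoder=None):
    # wm_encoder.encode(img)  # original watermarking call, disabled
    return img                # hand the image back unchanged

def check_safety(images):
    # images, has_nsfw = safety_checker(images)  # original filter, disabled
    return images, [False] * len(images)  # keep the expected return shape
```

Keeping the original call signatures means the surrounding code needs no further changes, and the unused imports can be commented out separately.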

Also on a 1660 Ti and only getting plain green images.

nibblesnbits avatar Sep 25 '22 18:09 nibblesnbits

Solution for 16xx card owners, which worked for me:

  1. Download the cuDNN libraries from the NVIDIA site, version > 8.2.0 (I have tested 8.5.0.96 and 8.3.3.40)
  2. Place them into your torch installation: conda\envs\ldm\Lib\site-packages\torch\lib
  3. Place the missing dependency zlibwapi.dll into the same folder -or-
  4. Update torch to a version that includes a newer cuDNN, e.g. torch==1.12.0+cu116

After that you should get black images instead of green, which means you are on the right track. Then add the following lines to txt2img.py:

torch.backends.cudnn.benchmark = True
torch.backends.cudnn.enabled = True

After that you should get normal images, not green and not black.

ArDiouscuros avatar Sep 28 '22 10:09 ArDiouscuros
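Step 1's version constraint is strict (greater than 8.2.0, not equal to it); a small helper to sanity-check a cuDNN version string before copying any files (the helper name and exclusive-minimum semantics are my own):

```python
def cudnn_newer_than(version: str, minimum=(8, 2, 0)) -> bool:
    # "8.5.0.96" -> (8, 5, 0, 96); tuple comparison handles versions
    # with more fields than the minimum correctly.
    parts = tuple(int(p) for p in version.split("."))
    return parts > minimum
```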

Solution for 16xx card owners, which worked for me:

  1. Download the cuDNN libraries from the NVIDIA site, version > 8.2.0 (I have tested 8.5.0.96 and 8.3.3.40)
  2. Place them into your torch installation: conda\envs\ldm\Lib\site-packages\torch\lib
  3. Place the missing dependency zlibwapi.dll into the same folder -or-
  4. Update torch to a version that includes a newer cuDNN, e.g. torch==1.12.0+cu116

After that you should get black images instead of green, which means you are on the right track. Then add the following lines to txt2img.py:

torch.backends.cudnn.benchmark = True
torch.backends.cudnn.enabled = True

After that you should get normal images, not green and not black.

This works, thank you!

leonidwang avatar Oct 05 '22 08:10 leonidwang

Solution for 16xx card owners, which worked for me:

  1. Download the cuDNN libraries from the NVIDIA site, version > 8.2.0 (I have tested 8.5.0.96 and 8.3.3.40)
  2. Place them into your torch installation: conda\envs\ldm\Lib\site-packages\torch\lib
  3. Place the missing dependency zlibwapi.dll into the same folder -or-
  4. Update torch to a version that includes a newer cuDNN, e.g. torch==1.12.0+cu116

After that you should get black images instead of green, which means you are on the right track. Then add the following lines to txt2img.py:

torch.backends.cudnn.benchmark = True
torch.backends.cudnn.enabled = True

After that you should get normal images, not green and not black.

Hey, where do I apply step 4?

fabsway23 avatar Oct 08 '22 13:10 fabsway23

img2img user here - getting green output, and enabling the precision-full option gives an error: "Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same". Is anyone else facing this issue?

fashiontryon-production avatar Oct 11 '22 21:10 fashiontryon-production
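That error means the init image stayed fp32 while the model weights were halved; the usual fix is to cast the input to the model's parameter dtype before encoding. A sketch, assuming a torch-style model (the helper name is mine):

```python
def match_dtype(image_tensor, model):
    # Cast the input to whatever dtype/device the weights actually use,
    # so the conv layers see matching tensor types.
    param = next(model.parameters())
    return image_tensor.to(dtype=param.dtype, device=param.device)
```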

torch.backends.cudnn.benchmark = True
torch.backends.cudnn.enabled = True

Hey, this is very simple: find the file called txt2img.py and add those two lines at the end of the file, then add import torch at the beginning and you are done. You don't need to use those flags anymore.

FVolral avatar Oct 20 '22 22:10 FVolral

  1. Place them into your torch installation: conda\envs\ldm\Lib\site-packages\torch\lib

Does this mean just the .lib files, or both the .dll and .lib files?

If the .dll files are included, do I need to overwrite the original files?

ccccxxx avatar Oct 24 '22 04:10 ccccxxx

I'm having the same issue with training; does anyone know a setting that resolves this? The precision setting is not applicable to training.

furkan-celik avatar Jan 22 '23 17:01 furkan-celik

You should not; this is an old issue. Update everything, and give more details about your environment if that doesn't help.

FVolral avatar Jan 22 '23 23:01 FVolral

I have the latest version of the stable-diffusion repo and am following their instructions for setting up the environment:

conda env create -f environment.yaml
conda activate ldm

However, when I run any of the given scripts with python main.py --base ./configs/latent-diffusion/.yaml -t --gpus 0, -n "256_stable_diff_4ch", all I get is an image of a single color.

I have checked the weights and grads of the model; none of them are NaN or Inf, and I am observing this from the initialization of the model. OpenAI's improved-diffusion repo works fine, and score-guided diffusion also works fine, but somehow I couldn't manage to run stable diffusion.

I am using an NVIDIA A40 on a server, but the result is the same on both CPU and GPU runs. Here is my pip list. I also have a dataset of my own that I am using with torchvision.datasets.ImageFolder. I have also tried CelebA-HQ, but the result is the same on both, as I said.

absl-py 1.4.0 aiohttp 3.8.3 aiosignal 1.3.1 albumentations 0.4.3 altair 4.2.0 antlr4-python3-runtime 4.8 async-timeout 4.0.2 attrs 22.2.0 backports.zoneinfo 0.2.1 blinker 1.5 brotlipy 0.7.0 cachetools 5.3.0 certifi 2022.12.7 cffi 1.15.1 charset-normalizer 2.0.4 click 8.1.3 clip 1.0 /home/guests/furkan_celik/stable-diffusion/src/clip coloredlogs 15.0.1 cryptography 38.0.4 decorator 5.1.1 diffusers 0.11.1 einops 0.3.0 entrypoints 0.4 filelock 3.9.0 flatbuffers 23.1.21 flit-core 3.6.0 frozenlist 1.3.3 fsspec 2023.1.0 ftfy 6.1.1 future 0.18.3 gitdb 4.0.10 GitPython 3.1.30 google-auth 2.16.0 google-auth-oauthlib 0.4.6 grpcio 1.51.1 huggingface-hub 0.11.1 humanfriendly 10.0 idna 3.4 imageio 2.9.0 imageio-ffmpeg 0.4.2 imgaug 0.2.6 importlib-metadata 6.0.0 importlib-resources 5.10.2 invisible-watermark 0.1.5 Jinja2 3.1.2 jsonschema 4.17.3 kornia 0.6.0 latent-diffusion 0.0.1 /home/guests/furkan_celik/stable-diffusion Markdown 3.4.1 markdown-it-py 2.1.0 MarkupSafe 2.1.2 mdurl 0.1.2 mkl-fft 1.3.1 mkl-random 1.2.2 mkl-service 2.4.0 mpmath 1.2.1 multidict 6.0.4 networkx 3.0 numpy 1.24.1 oauthlib 3.2.2 omegaconf 2.1.1 onnx 1.13.0 onnxruntime 1.13.1 opencv-python 4.1.2.30 opencv-python-headless 4.7.0.68 packaging 23.0 pandas 1.5.3 Pillow 9.3.0 pip 20.3.3 pkgutil-resolve-name 1.3.10 protobuf 3.20.3 pudb 2019.2 pyarrow 10.0.1 pyasn1 0.4.8 pyasn1-modules 0.2.8 pycparser 2.21 pydeck 0.8.0 pyDeprecate 0.3.1 Pygments 2.14.0 Pympler 1.0.1 pyOpenSSL 22.0.0 pyrsistent 0.19.3 PySocks 1.7.1 python-dateutil 2.8.2 pytorch-lightning 1.4.2 pytz 2022.7.1 pytz-deprecation-shim 0.1.0.post0 PyWavelets 1.4.1 PyYAML 6.0 regex 2022.10.31 requests 2.28.1 requests-oauthlib 1.3.1 rich 13.2.0 rsa 4.9 scikit-image 0.19.3 scipy 1.10.0 semver 2.13.0 setuptools 65.6.3 six 1.16.0 smmap 5.0.0 streamlit 1.17.0 sympy 1.11.1 taming-transformers 0.0.1 /home/guests/furkan_celik/stable-diffusion/src/taming-transformers tensorboard 2.11.2 tensorboard-data-server 0.6.1 tensorboard-plugin-wit 1.8.1 test-tube 0.7.5 
tifffile 2023.1.23.1 tokenizers 0.12.1 toml 0.10.2 toolz 0.12.0 torch 1.11.0 torch-fidelity 0.3.0 torchmetrics 0.6.0 torchvision 0.12.0 tornado 6.2 tqdm 4.64.1 transformers 4.19.2 typing-extensions 4.4.0 tzdata 2022.7 tzlocal 4.2 urllib3 1.26.14 urwid 2.1.2 validators 0.20.0 watchdog 2.2.1 wcwidth 0.2.6 Werkzeug 2.2.2 wheel 0.37.1 yarl 1.8.2 zipp 3.11.0

furkan-celik avatar Jan 23 '23 21:01 furkan-celik