glid-3-xl
How much GPU memory is required?
I have an 11GB RTX 3080 Ti and it seems to be failing. On CPU, I get the error "RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'". I hope I installed it correctly; I had to install some additional repos like transformers and taming-transformers. This is for the CLIP guidance.
My pip freeze:
absl-py==1.0.0 aiohttp==3.8.1 aiosignal==1.2.0 async-timeout==4.0.2 attrs==21.4.0 blobfile==1.3.0 cachetools==5.0.0 certifi==2021.10.8 charset-normalizer==2.0.12 click==8.1.3 clip==1.0 einops==0.4.1 filelock==3.6.0 frozenlist==1.3.0 fsspec==2022.3.0 ftfy==6.1.1 google-auth==2.6.6 google-auth-oauthlib==0.4.6 grpcio==1.44.0 -e git+https://github.com/Jack000/glid-3-xl@a0b5be4b04378d4d4779240d3e0a599360c1a133#egg=guided_diffusion huggingface-hub==0.5.1 idna==3.3 importlib-metadata==4.11.3 joblib==1.1.0 -e git+https://github.com/CompVis/latent-diffusion.git@5a6571e384f9a9b492bbfaca594a2b00cad55279#egg=latent_diffusion Markdown==3.3.6 multidict==6.0.2 numpy==1.22.3 oauthlib==3.2.0 packaging==21.3 Pillow==9.1.0 protobuf==3.20.1 pyasn1==0.4.8 pyasn1-modules==0.2.8 pycryptodomex==3.14.1 pyDeprecate==0.3.2 pyparsing==3.0.8 PyQt5==5.15.6 PyQt5-Qt5==5.15.2 PyQt5-sip==12.10.1 pytorch-lightning==1.6.2 PyYAML==6.0 regex==2022.4.24 requests==2.27.1 requests-oauthlib==1.3.1 rsa==4.8 sacremoses==0.0.53 six==1.16.0 -e git+https://github.com/CompVis/taming-transformers.git@24268930bf1dce879235a7fddd0b2355b84d7ea6#egg=taming_transformers tensorboard==2.9.0 tensorboard-data-server==0.6.1 tensorboard-plugin-wit==1.8.1 tokenizers==0.12.1 torch==1.11.0+cu113 torchaudio==0.9.0 torchmetrics==0.8.1 torchvision==0.12.0+cu113 tqdm==4.64.0 transformers==4.18.0 typing-extensions==4.2.0 urllib3==1.26.9 wcwidth==0.2.5 Werkzeug==2.1.2 xmltodict==0.12.0 yarl==1.7.2 zipp==3.8.0
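For context, that "Half" error is what PyTorch raises when a half-precision (fp16) module hits LayerNorm on CPU, where no fp16 kernel exists; a minimal reproduction independent of glid-3-xl:

```python
import torch

# LayerNorm has no fp16 (Half) kernel on CPU, so this raises:
# RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
ln = torch.nn.LayerNorm(8).half()
ln(torch.randn(1, 8, dtype=torch.half))
```

So the error comes from the model being loaded in fp16, not from the install itself; running on GPU (or casting to float32 if adapting for CPU) avoids it.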
If you run without clip_guidance and with batch_size=1, you can make it run with 8.5GB of VRAM. The code is not made to run on CPU (it might be possible to adapt it, though). You can keep your VRAM under 11GB with clip_guidance if you replace "ViT-L/14" with "RN50".
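If the CLIP model name is passed to clip.load in the sampling script, the swap is roughly a one-line change; a sketch with assumed variable names (the exact names and call site in glid-3-xl may differ):

```python
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"

# Use the smaller RN50 CLIP model for guidance instead of ViT-L/14 to reduce VRAM use
clip_model, clip_preprocess = clip.load("RN50", device=device, jit=False)
clip_model.eval().requires_grad_(False)
```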
@limiteinductive thanks for saving me the time of testing out all the other CLIP models. Any idea if it's easy to run the sample code with the older ViT-B/32 model, or will it require manually adjusting each instance of nn.Linear?
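One thing worth noting (an assumption about why the Linear layers matter, not verified against the glid-3-xl code): the CLIP embedding width differs per model, ViT-L/14 is 768-dimensional, ViT-B/32 is 512 and RN50 is 1024, so any nn.Linear projection sized to one model's embedding would need its dimensions changed for another. A quick way to check each model's output width:

```python
import torch
import clip

for name in ["ViT-L/14", "ViT-B/32", "RN50"]:
    model, _ = clip.load(name, device="cpu")
    with torch.no_grad():
        emb = model.encode_text(clip.tokenize(["a test prompt"]))
    print(name, emb.shape[-1])  # 768, 512, 1024 respectively
```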
The code is not made to run on CPU (it might be possible to adapt it though).
In addition, I found it feasible to run on CPU by adding a "--cpu" argument and adjusting modules.py under the "latent-diffusion/ldm/modules/encoders" directory to use "cpu" anywhere it would otherwise call "cuda".
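A minimal sketch of that kind of change, assuming a hypothetical --cpu flag and a stand-in model (the actual argument names, modules and call sites in the repo will differ); note the model also has to stay in float32 on CPU, otherwise the LayerNorm/Half error above comes back:

```python
import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument("--cpu", action="store_true", help="force CPU execution")
args = parser.parse_args()

# Pick the device once and route everything through it instead of hard-coded "cuda"
device = torch.device("cpu" if args.cpu or not torch.cuda.is_available() else "cuda")

# Stand-in for the real model: half precision only on GPU, float32 on CPU
model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.LayerNorm(16))
model = model.half() if device.type == "cuda" else model.float()
model = model.to(device)
```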