Benjamin Trom
You installed the wrong package: the `ldm` on PyPI is not the right one. You should clone this repo https://github.com/CompVis/latent-diffusion and run `pip install .` inside it.
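A minimal sketch of the from-source install described above (assuming `pip` and `git` are available; the `pip uninstall` line is only needed if the unrelated PyPI package was installed first):

```shell
# Remove the unrelated "ldm" package from PyPI, if present.
pip uninstall -y ldm

# Install the real ldm package from the CompVis repo instead.
git clone https://github.com/CompVis/latent-diffusion
cd latent-diffusion
pip install .
```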
@wes-kay I'd advise you to use Python 3.8 for deep learning packages (although I don't think that is the issue here). When you open your Python console and type `import...
> @limiteinductive It's not installed. As the setup in the latent diffusion doesn't contain the install for the package: https://github.com/CompVis/latent-diffusion/blob/main/setup.py > > "ldm from pypi is not the right package"...
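One way to diagnose this kind of wrong-package problem is to check which on-disk module a name actually resolves to before importing it. A minimal sketch, using the stdlib `json` as a stand-in module since `ldm` may not be installed in every environment (with the repo installed you would check `"ldm"` instead):

```python
import importlib.util

# Resolve the module name without importing it; spec is None if nothing
# by that name is installed on the current Python path.
spec = importlib.util.find_spec("json")
if spec is None:
    print("module not found -- it is not installed in this environment")
else:
    # spec.origin is the file the import would load, which tells you
    # whether you got the repo's package or an unrelated PyPI one.
    print(f"resolves to: {spec.origin}")
```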
@wes-kay ha ha, don't worry. Yes, sure, you can install it. If it breaks, it probably means you need to install a newer version of PyTorch in your ldm env....
@wes-kay bad news: your GPU doesn't have enough VRAM. You should use a service like Google Colab or Kaggle notebooks if you want to run this package.
Yeah, it seems so. You probably won't get a GPU for less than 0.25/hour.
If you run without clip_guidance and with batch_size=1, you can make it run with 8.5 GB of VRAM. The code is not made to run on CPU (it might be possible to...
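A quick sketch of a preflight check against the 8.5 GB figure mentioned above (the threshold comes from this thread; PyTorch is assumed but guarded in case it is absent):

```python
# Minimum reported in the thread: no clip_guidance, batch_size=1.
REQUIRED_GB = 8.5

try:
    import torch
    has_gpu = torch.cuda.is_available()
except ImportError:
    torch, has_gpu = None, False

if has_gpu:
    # Total device memory of GPU 0, in GiB.
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    msg = f"GPU VRAM: {total_gb:.1f} GB ({'enough' if total_gb >= REQUIRED_GB else 'too small'})"
else:
    msg = "No usable CUDA GPU; the code is not made to run on CPU."
print(msg)
```

This only checks total VRAM, not what is currently free; on a shared machine the usable amount can be lower.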
Yes, it needs a lot of VRAM; around 32 GB if I remember correctly.
Hi @alishan2040, the code to finetune is in the README. It has pretty high VRAM requirements, so I've only used it on an A100. With one A100 and a batch size...
I'm running some tests myself, so I don't have a definite answer. My theory is that the style of the images generated by glid-3-xl is a very shallow feature. So...