latent-diffusion
Update txt2img.py for colab
Add import path and memory optimizations from https://github.com/multimodalart/latent-diffusion-notebook; set ddim_eta to 0 for PLMS.
We probably want to keep compatibility for CPU users. Could you make the model-loading function take the device name (and only call .half() when it is on GPU)? Thank you :)
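A minimal sketch of what that could look like, using a generic `nn.Module` as a stand-in for the loaded checkpoint (the helper name `prepare_model` is hypothetical, not from the repo): move the model to the requested device first, and cast to fp16 only when the device is CUDA, since many CPU ops are slow or unsupported in half precision.

```python
import torch
import torch.nn as nn

def prepare_model(model: nn.Module, device: str = "cpu") -> nn.Module:
    """Move the model to `device`, casting to fp16 only on CUDA."""
    model = model.to(device)
    if device.startswith("cuda"):
        # fp16 halves GPU memory use; skip it on CPU where fp16
        # kernels are poorly supported.
        model = model.half()
    return model.eval()

# Stand-in for the real model loaded from the checkpoint.
model = prepare_model(nn.Linear(4, 4), device="cpu")
print(next(model.parameters()).dtype)  # torch.float32 on CPU
```

The same call with `device="cuda"` would yield fp16 parameters, matching the memory-optimized Colab behavior.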
I committed changes to implement this, and CUDA works, but I didn't get the CPU path to work. It seems there is some CUDA hardcoding in ddpm.py, which gives:
```
Traceback (most recent call last):
  File "scripts/txt2img.py", line 170, in <module>
    sample()
  File "scripts/txt2img.py", line 145, in sample
    uc = model.get_learned_conditioning(opt.n_samples * [""])
  File "./ldm/models/diffusion/ddpm.py", line 554, in get_learned_conditioning
    c = self.cond_stage_model.encode(c)
  File "./ldm/modules/encoders/modules.py", line 99, in encode
    return self(text)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "./ldm/modules/encoders/modules.py", line 91, in forward
    tokens = self.tknz_fn(text)#.to(self.device)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "./ldm/modules/encoders/modules.py", line 62, in forward
    tokens = batch_encoding["input_ids"].to(self.device)
  File "/usr/local/lib/python3.7/dist-packages/torch/cuda/__init__.py", line 214, in _lazy_init
    torch._C._cuda_init()
RuntimeError: No CUDA GPUs are available
```
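The failing line in the traceback is `.to(self.device)` with a device that was presumably hardcoded to `"cuda"` at construction time. A hedged sketch of the kind of fix this suggests (variable names assumed from the traceback, not verified against the repo): select the device at runtime with a CPU fallback instead of hardcoding it.

```python
import torch

# Pick the device at runtime instead of hardcoding "cuda" in the
# encoder's __init__; this avoids _lazy_init on CUDA-less machines.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Stand-in for batch_encoding["input_ids"] from the tokenizer.
tokens = torch.tensor([[0, 1, 2]]).to(device)
```

With this pattern, the `.to(device)` call is a no-op on CPU-only machines rather than a hard crash.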
So I went back to the original CompVis code, and it does not work for me without CUDA either.
Oh :/
I may have to fix that at some point, because these models are probably fast enough to run on CPU with PLMS sampling.
/sub, I ran into the same issue over at https://github.com/CompVis/latent-diffusion/issues/118