latent-diffusion
Run inference on CPU
Thanks a lot for the great library everybody! At the moment the inference command:
```
python scripts/txt2img.py --prompt "a virus monster is playing guitar, oil on canvas" --ddim_eta 0.0 --n_samples 4 --n_iter 4 --scale 5.0 --ddim_steps 50
```
is somewhat hardcoded to CUDA, i.e. it's not possible to run inference on CPU only.

I've hacked a bit into the code and replaced all `device="cuda"` statements with `device="cpu"`, and inference is actually very much feasible on CPU (one generation takes about 50 seconds).
Could we maybe make the code work on CPU as well?
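For reference, a minimal sketch of the device-agnostic pattern this would amount to (the `Linear` model is just a stand-in, not the repo's actual code): pick CUDA when it is available, fall back to CPU otherwise, and create the model and all tensors on that device.

```python
import torch

# Prefer CUDA when present, otherwise fall back to CPU.
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model = torch.nn.Linear(4, 4).to(device)   # stand-in for the diffusion model
x = torch.randn(1, 4, device=device)       # allocate inputs on the same device
print(model(x).device)                     # cuda:0 on a GPU box, cpu otherwise
```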
Totally agree; in a way this is a political issue of "GPU social class". Slower is always better than not feasible at all.
In `txt2img.py`:

Move the following under the imports so it populates the `device` variable for the functions below:

```python
device = "cpu"  # torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
print(device)
```

and change this:

```python
# model.cuda()
model.to(device)
```
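For context, the second change lands in the script's model-loading helper. A simplified sketch, assuming the helper instantiates the model from the config via `instantiate_from_config` (the real function also restores the checkpoint weights before moving the model):

```python
from ldm.util import instantiate_from_config

device = "cpu"  # from the step above

def load_model_from_config(config):
    # Simplified sketch: the actual helper also loads the state dict
    # from the checkpoint file before moving the model.
    model = instantiate_from_config(config.model)
    # model.cuda()    # old: hardcoded CUDA
    model.to(device)  # new: follows the module-level device variable
    model.eval()
    return model
```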
Finally, change the hardcoded `"cuda"` device strings to `"cpu"` in `ldm/modules/encoders/modules.py`.
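The encoder classes in that file (e.g. `BERTEmbedder`) default their `device` argument to `"cuda"`, so either change the default or pass a device in when they are constructed. A self-contained illustration of the pattern, with a hypothetical stand-in class rather than the repo's actual encoder:

```python
import torch
from torch import nn

# Pick the device once; the encoders then follow it instead of defaulting to CUDA.
device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

class TextEmbedder(nn.Module):
    """Hypothetical stand-in for the repo's encoder classes."""
    def __init__(self, dim=8, device=device):  # was: device="cuda"
        super().__init__()
        self.device = device
        self.proj = nn.Linear(dim, dim).to(device)

    def forward(self, x):
        return self.proj(x.to(self.device))

embedder = TextEmbedder()
print(embedder(torch.randn(1, 8)).device)  # cpu on a machine without a GPU
```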