
Run inference on CPU

patrickvonplaten opened this issue 2 years ago • 2 comments

Thanks a lot for the great library, everybody! At the moment the inference script:

python scripts/txt2img.py --prompt "a virus monster is playing guitar, oil on canvas" --ddim_eta 0.0 --n_samples 4 --n_iter 4 --scale 5.0  --ddim_steps 50

is hardcoded to CUDA, i.e. it's not possible to run inference on CPU only.

I've hacked around in the code a bit and replaced all device="cuda" statements with device="cpu", and inference is actually very feasible on CPU (it takes 50 seconds per generation).
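For reference, the standard device-agnostic pattern in PyTorch looks roughly like this (a minimal, self-contained sketch; the nn.Linear is a stand-in, not the actual latent-diffusion model):

import torch
import torch.nn as nn

# Prefer CUDA when available, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(4, 4).to(device)    # move the weights to the chosen device
x = torch.randn(1, 4, device=device)  # inputs must live on the same device
y = model(x)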

Could we maybe make the code work on CPU as well?
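One clean way to support this would be a command-line switch. A hypothetical sketch of what that could look like in txt2img.py (the script does not currently expose a --device option):

import argparse
import torch

parser = argparse.ArgumentParser()
parser.add_argument(
    "--device",
    type=str,
    default="cuda" if torch.cuda.is_available() else "cpu",
    help="device to run sampling on, e.g. cuda or cpu",
)
opt = parser.parse_args()

device = torch.device(opt.device)
# then: model.to(device) instead of the hardcoded model.cuda()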

patrickvonplaten · May 27 '22 18:05

Totally agree; in a way this is a political issue of "GPU social class".

Slower is always better than not feasible at all.

carlitoselmago · Jun 14 '22 14:06

in txt2img.py:

Move the following under the imports so it populates the device variable used by the functions:

device = "cpu"  # torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
print(device)

and change this:

# model.cuda()
model.to(device)

Then change the remaining cuda references to cpu in ldm/modules/encoders/modules.py.
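For that last step, the edits in ldm/modules/encoders/modules.py amount to replacing hardcoded device="cuda" defaults. A hedged sketch of the idea, using an illustrative stand-in class (the exact encoder classes and signatures depend on your checkout):

import torch
import torch.nn as nn

# Compute the device once instead of hardcoding "cuda" in each signature
DEFAULT_DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

class ExampleEmbedder(nn.Module):
    # Illustrative stand-in for the encoders in modules.py, whose
    # constructors default to device="cuda" in the original code
    def __init__(self, device=DEFAULT_DEVICE):
        super().__init__()
        self.device = torch.device(device)

    def forward(self, tokens):
        # Move inputs onto the configured device before encoding
        return tokens.to(self.device)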

eformx · Jun 17 '22 21:06