
python diff_inference.py gets OutOfMemoryError

whybfq opened this issue 1 year ago

I first generated the pictures using diff_inference.py:

```
python diff_inference.py -nb 4000 --dataset laion --capstyle instancelevel_blip --rand_augs rand_numb_add
```

and ran into the following error:

```
  File "/home/anaconda3/envs/diffrep/lib/python3.9/site-packages/diffusers/models/cross_attention.py", line 314, in __call__
    attention_probs = attn.get_attention_scores(query, key, attention_mask)
  File "/home/anaconda3/envs/diffrep/lib/python3.9/site-packages/diffusers/models/cross_attention.py", line 253, in get_attention_scores
    attention_probs = attention_scores.softmax(dim=-1)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.16 GiB (GPU 0; 15.46 GiB total capacity; 11.31 GiB already allocated; 2.48 GiB free; 11.39 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```

Yet the GPU still seems to have plenty of free memory. Thanks for any suggestions.

[screenshot of GPU memory usage attached]

whybfq commented on Apr 17 '24
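For reference, the allocator hint at the end of the traceback above can be tried by setting PYTORCH_CUDA_ALLOC_CONF before anything is allocated on the GPU. A minimal sketch, where the 128 MiB split size is only an illustrative starting point, not a value the repo prescribes:

```python
import os

# Allocator hint from the traceback; must be set before the first CUDA allocation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch

# Sanity-check how much memory is really free on GPU 0 (values are in bytes).
free_bytes, total_bytes = torch.cuda.mem_get_info(0)
print(f"free: {free_bytes / 2**30:.2f} GiB of {total_bytes / 2**30:.2f} GiB")
```

If mem_get_info reports several free GiB right before the failing step, the limit is more likely the size of the attention softmax itself rather than allocator fragmentation, in which case memory-saving options on the pipeline matter more than the split size.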

Is it possible to run single-image inference with the stabilityai/stable-diffusion-2-1 model on your GPU? Most of my experiments were conducted on an A5000/A6000.

The case you are testing is plain SD 2.1 inference with modified prompts, so you could change some of the hyperparameters here and see whether the code runs.
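A minimal standalone check along those lines, independent of diff_inference.py, might look like the sketch below; the prompt, step count, and output filename are placeholders, and enable_attention_slicing is a generic diffusers memory saver rather than something this repo prescribes:

```python
import torch
from diffusers import StableDiffusionPipeline

# Half precision roughly halves activation memory on a 16 GiB card.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Compute the attention softmax (the line that OOMs above) in smaller slices.
pipe.enable_attention_slicing()

image = pipe(
    "a photograph of an astronaut riding a horse",  # placeholder prompt
    num_inference_steps=25,
).images[0]
image.save("sd21_single_image.png")
```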

somepago commented on Apr 18 '24

Thanks for your suggestion, but even with im_batch=1, num_inference_steps=5, and nbatches=4 I still get the same error.

[screenshot of the error attached]
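If reducing the batch size and step count is not enough, the remaining levers are the memory-saving switches diffusers exposes on the pipeline object. A sketch under the assumption that the pipeline built inside diff_inference.py can be patched the same way, and that the installed diffusers release (the cross_attention.py path in the traceback suggests an older one) still provides these methods:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
)

# Keep submodules (text encoder, UNet, VAE) on the CPU and move each to the GPU
# only while it runs; needs the `accelerate` package and trades speed for memory.
pipe.enable_model_cpu_offload()

# Slice the attention softmax; if xformers is installed, the commented-out call
# below usually saves even more memory.
pipe.enable_attention_slicing()
# pipe.enable_xformers_memory_efficient_attention()

image = pipe("test prompt", num_inference_steps=5).images[0]  # placeholder prompt
```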

whybfq commented on Apr 20 '24