
torch.cuda out of memory

Open · Hankerlove opened this issue 1 year ago · 1 comment

Thanks to Dr. Xu for replying to my previous question. I can now run inference with your brilliant model locally, but I have another question:

My GPU is an RTX 3060 Laptop. Inference takes about 25 minutes and sometimes fails with 'CUDA out of memory'. However, I have seen someone on bilibili release an installation package with a .bat that runs inference directly. I tried that .bat and it finishes inference in about half a minute with a great result, which really puzzles me.

I wonder how I can solve this, or whether the model simply needs a more capable GPU for inference.

Hankerlove · Mar 20 '24
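One quick sanity check for this kind of report is to confirm how much VRAM the card actually exposes before blaming the model; a minimal sketch using only standard PyTorch CUDA queries:

```python
import torch

# Print the GPU model and free vs. total VRAM, to check whether
# the card itself (e.g. a 6 GB laptop 3060) is the bottleneck.
if torch.cuda.is_available():
    free, total = torch.cuda.mem_get_info()  # both values in bytes
    print(f"{torch.cuda.get_device_name(0)}: "
          f"{free / 1024**3:.1f} GiB free of {total / 1024**3:.1f} GiB")
else:
    print("CUDA is not available")
```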

Hi. It takes around 3 seconds and 6 GB of memory for 1 sample and 20 steps on our RTX 4090 GPU. A 6 GB 3060 is probably not quite enough.

levihsu · Mar 20 '24
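OOTDiffusion builds on Hugging Face diffusers, so the usual diffusers memory-saving switches are worth trying when VRAM is tight. A minimal sketch, assuming a diffusers-style pipeline; the pipeline class and model ID below are placeholders for illustration, not OOTDiffusion's actual entry point:

```python
import torch
from diffusers import StableDiffusionPipeline  # placeholder; OOTDiffusion ships its own pipeline class

# Hypothetical model ID, for illustration only.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision roughly halves VRAM use
)
pipe.enable_attention_slicing()   # compute attention in chunks: lower peak memory, somewhat slower
pipe.enable_model_cpu_offload()   # keep only the active submodule on the GPU (needs `accelerate`)

image = pipe("a person wearing a garment", num_inference_steps=20).images[0]
image.save("out.png")
```

If it still runs out of memory, reducing image resolution or batch size is the remaining lever; CPU offload in particular trades speed for a much smaller GPU footprint.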

> Hi. It takes around 3 seconds and 6 GB of memory for 1 sample and 20 steps on our RTX 4090 GPU. A 6 GB 3060 is probably not quite enough.

I use a 3060 Ti with 16 GB, same problem.

bjbjbjbj · May 25 '24