
How much GPU memory is needed to run the test demo?

Open · anthonyyuan opened this issue 1 year ago · 5 comments

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 58.00 MiB. GPU 0 has a total capacty of 15.70 GiB of which 53.31 MiB is free. Including non-PyTorch memory, this process has 15.07 GiB memory in use. Of the allocated memory 14.65 GiB is allocated by PyTorch, and 196.92 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

anthonyyuan · Dec 20 '23 14:12
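As an aside on the error text itself: the max_split_size_mb hint only mitigates allocator fragmentation; it will not make a model fit that is simply too large for the card. A minimal sketch of trying it, assuming you launch the demo from your own Python entry point (the 128 MiB value is just an illustrative choice):

```python
# Sketch: cap the CUDA caching allocator's split size to reduce fragmentation.
# The env var must be read before the first CUDA allocation, so set it before
# importing torch (or export it in the shell before launching the demo).
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"  # 128 is an example value

import torch
print(torch.cuda.get_device_name(0))  # allocations from here on use the setting
```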

I also couldn't run it with 16 GB of VRAM on Colab.

himmetozcan · Dec 21 '23 07:12

Use "Pruned model - 4.57 GB" I am running it on a GTX 1070 8GB VRAM GPU (utilizing 2GB from Shared Memory - Total 10GB)

sohaibhaider · Dec 21 '23 12:12

While running inference on an A10 GPU, it used about 18 GB of VRAM and about 15 GB of system RAM.

Eddudos · Dec 22 '23 06:12
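If you want to compare numbers on your own card, PyTorch's built-in memory statistics give a rough picture. A minimal sketch; the inference call is a placeholder for however you invoke the demo:

```python
# Sketch: report peak GPU memory after a run.
import torch

torch.cuda.reset_peak_memory_stats()

# ... run the AnyDoor inference here (placeholder) ...

print(f"peak allocated: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GiB")
print(f"peak reserved:  {torch.cuda.max_memory_reserved() / 1024**3:.2f} GiB")
```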

An RTX 3090 (24 GB) can run the inference script.

luccachiang · Dec 25 '23 14:12

Use "Pruned model - 4.57 GB" I am running it on a GTX 1070 8GB VRAM GPU (utilizing 2GB from Shared Memory - Total 10GB)

How do I need to modify the configuration file to use AnyDoor's pruned model after downloading it?

Gooddz1 · May 15 '24 03:05
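One way to try the pruned checkpoint, as a sketch only: point whatever checkpoint path the inference config exposes at the pruned file instead of the full one. The config file name and key below are placeholders, not AnyDoor's actual schema; check the repo's inference config for the real names.

```python
# Sketch only: the config path and key are placeholders, not AnyDoor's actual schema.
from omegaconf import OmegaConf

cfg = OmegaConf.load("configs/inference.yaml")             # hypothetical config path
cfg.pretrained_model = "path/to/pruned_model_4.57GB.ckpt"  # swap in the pruned checkpoint
OmegaConf.save(cfg, "configs/inference.yaml")
```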