AnyDoor
How much GPU memory is needed to run the test demo?
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 58.00 MiB. GPU 0 has a total capacty of 15.70 GiB of which 53.31 MiB is free. Including non-PyTorch memory, this process has 15.07 GiB memory in use. Of the allocated memory 14.65 GiB is allocated by PyTorch, and 196.92 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I also couldn't run it with 16 GB of VRAM on Colab.
Use the "Pruned model - 4.57 GB". I am running it on a GTX 1070 with 8 GB of VRAM (plus 2 GB of shared memory, 10 GB total).
While running inference on an A10 GPU, it used roughly 18 GB of VRAM and 15 GB of RAM.
A 3090 with 24 GB of VRAM can run the inference script.
How do I need to modify the configuration file to use AnyDoor's pruned model after downloading it?
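Assuming the inference config is a YAML file that references the checkpoint by path, switching to the pruned weights would amount to editing that path. The key name and file locations below are illustrative, not taken from the repository; match them against the config actually shipped with AnyDoor.

```yaml
# Hypothetical sketch: point the checkpoint path at the pruned 4.57 GB
# weights instead of the full model. Key name and paths are assumptions.
pretrained_model: ./checkpoints/anydoor_pruned.ckpt
```

The model architecture is unchanged by pruning the checkpoint, so no other config fields should need to change in this scenario.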