finetune-SAM
GPU allocated
Thank you for your excellent work on the paper. I encountered a CUDA out-of-memory error with a batch size of 2 on an A100 GPU. I noticed in Table 3 that the allocated GPU memory is far larger than what the trainable-parameter count alone would suggest. Could you explain why there is such a difference? Thank you.
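For context on the gap being asked about: during training, memory is consumed not only by trainable weights but also by their gradients, optimizer state (e.g. two moment buffers for Adam), the frozen backbone weights that must still reside on the GPU, and activations saved for backpropagation, which scale with batch size and image resolution. The sketch below is a rough back-of-envelope accounting with purely illustrative numbers, not measurements from finetune-SAM:

```python
# Back-of-envelope accounting for why training memory far exceeds
# trainable-parameter storage. All numbers are illustrative assumptions,
# not measurements from finetune-SAM.

def training_memory_gb(trainable_params, frozen_params,
                       activation_gb_per_sample, batch_size,
                       bytes_per_elem=4):
    """Rough fp32 training-memory estimate in GB.

    Trainable weights need: weight + gradient + 2 Adam moment
    buffers (4x). Frozen weights (the bulk of SAM when fine-tuning
    only adapters/LoRA) still occupy 1x, and saved activations
    scale linearly with batch size.
    """
    gb = 1024 ** 3
    trainable = trainable_params * bytes_per_elem * 4 / gb
    frozen = frozen_params * bytes_per_elem * 1 / gb
    activations = activation_gb_per_sample * batch_size
    return trainable + frozen + activations

# Hypothetical example: 4M trainable adapter params, 90M frozen
# ViT-B backbone params, ~8 GB of activations per high-res image.
est = training_memory_gb(4e6, 90e6, 8.0, batch_size=2)
print(f"~{est:.1f} GB")  # activations dominate, not parameter count
```

Under these assumed numbers the trainable parameters account for well under 0.1 GB, while activations account for nearly all of the footprint, which is consistent with hitting OOM even at batch size 2 on an A100.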