Problem with GPU memory
Hello, thanks for your ADA codebase. I am trying to train PV-RCNN with Bi3D, using KITTI as the source domain and a custom dataset in KITTI format (smaller than KITTI) as the target domain. A CUDA out-of-memory error occurred during Stage 2. I use 6 RTX 2080 Ti GPUs (each with 10 GB of memory) and set BATCH_SIZE_PER_GPU to 1. The discriminator training and active evaluation both completed successfully, but the CUDA out-of-memory error occurred right after these steps. Is there a bug in the memory management of this code, or do I simply need more GPU memory to train?
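In case it helps to localize the problem, here is a small PyTorch sketch (not part of the 3DTrans code; where to call it is only my guess) that I could drop around the Stage 2 entry point to check whether memory from the discriminator training or active evaluation is still held when the detector training starts:

```python
import torch

def log_gpu_memory(tag: str) -> None:
    """Print current and peak GPU memory usage for each visible device."""
    for device_id in range(torch.cuda.device_count()):
        allocated = torch.cuda.memory_allocated(device_id) / 1024 ** 3
        peak = torch.cuda.max_memory_allocated(device_id) / 1024 ** 3
        print(f"[{tag}] GPU {device_id}: allocated={allocated:.2f} GB, peak={peak:.2f} GB")

# Example usage (hypothetical placement, e.g. right after active evaluation
# and just before Stage 2 detector training begins):
# log_gpu_memory("after active evaluation")
# torch.cuda.empty_cache()               # release cached blocks held by earlier stages
# torch.cuda.reset_peak_memory_stats()   # so the next reported peak reflects Stage 2 only
```

If the allocated memory is already close to the 10 GB limit before Stage 2 starts, that would suggest tensors from the earlier stages are not being freed; otherwise the Stage 2 model itself may just need more memory than a 2080 Ti provides.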
Looking forward to your response!