BEVFormer
out of memory
I have 2 GPUs on my server. I launched training as `./tools/dist_train.sh ./projects/configs/bevformer/bevformer_base.py 2`, and it fails with: `CUDA out of memory. Tried to allocate 252.00 MiB (GPU 0; 23.69 GiB total capacity; 267.51 MiB already allocated; 195.12 MiB free; 282.00 MiB reserved in total by PyTorch)`. It seems that it didn't use 2 GPUs. When I changed "2" to "3", it gives: `RuntimeError: CUDA error: out of memory. CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.` Has anyone ever met this?
Download cuDNN and give it a try.
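The error message reports GPU 0 as nearly full (only ~195 MiB free out of 23.69 GiB) before training has allocated anything substantial, which usually means another process is already holding that GPU's memory. A sketch of how to check and then pin training to free GPUs (the device IDs `0,1` here are hypothetical; substitute whichever GPUs `nvidia-smi` shows as free):

```shell
# Inspect per-GPU memory usage and the processes holding it
nvidia-smi

# Restrict the run to two free GPUs (hypothetical IDs 0 and 1),
# then launch distributed training on exactly those two
CUDA_VISIBLE_DEVICES=0,1 ./tools/dist_train.sh \
    ./projects/configs/bevformer/bevformer_base.py 2
```

Note that passing "3" to `dist_train.sh` on a 2-GPU machine spawns a third process that has no free device to bind to, which is consistent with the second `RuntimeError: CUDA error: out of memory`.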