
GPU memory usage

revisitq opened this issue 3 years ago · 10 comments

The GPU memory usage reported in your paper is about 10 GB, but on my machine it is about 18 GB when I train the model. Is there some setting in the repo that differs from your paper?

revisitq avatar Nov 05 '21 12:11 revisitq

The validation memory usage is about 7 GB, and the SECOND model is not loaded during validation.

revisitq avatar Nov 05 '21 13:11 revisitq

Could you try running distributed training with only 1 GPU? The reason might be that the model is loaded onto a single GPU multiple times.

xy-guo avatar Nov 07 '21 14:11 xy-guo

Make sure you run the code using the script given in the README.

xy-guo avatar Nov 07 '21 14:11 xy-guo

Thanks for your reply. I have tried training with only 1 GPU using the command CUDA_VISIBLE_DEVICES='1' ./scripts/dist_train.sh 1 dev configs/stereo/kitti_models/liga.3d-and-bev.yaml, and the GPU memory usage is still the same. Here is the log: log_train.txt

revisitq avatar Nov 08 '21 01:11 revisitq

If you train on multiple GPUs, is the GPU memory usage roughly the same for every GPU? My model was trained on a TITAN X, which has only 12 GB of memory. Maybe you can print out the real GPU memory consumption using PyTorch APIs; sometimes PyTorch allocates more GPU memory than needed.
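
For example (a minimal sketch, not taken from the LIGA-Stereo code; the helper name report_gpu_memory is made up for illustration), PyTorch's own counters can be compared against what nvidia-smi shows:

```python
import torch

def report_gpu_memory(device=0):
    # Memory held by live tensors on this device
    allocated = torch.cuda.memory_allocated(device) / 1024 ** 3
    # Memory reserved by PyTorch's caching allocator; this is roughly what
    # nvidia-smi attributes to the process and can be much larger than `allocated`
    reserved = torch.cuda.memory_reserved(device) / 1024 ** 3
    print(f"GPU {device}: allocated {allocated:.2f} GB, reserved {reserved:.2f} GB")

# e.g. call report_gpu_memory() once per training iteration or after validation
```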

xy-guo avatar Nov 10 '21 01:11 xy-guo

Actually, the memory allocated is about 10 GB, but I don't know why the reported GPU memory usage is about 18 GB.

revisitq avatar Nov 10 '21 04:11 revisitq

When training on multiple GPUs, the GPU memory usage is the same for every GPU.

revisitq avatar Nov 10 '21 05:11 revisitq

PyTorch may pre-allocate GPU memory for future use, and that cache is not freed automatically. Potential solutions include explicitly limiting GPU memory usage or calling torch.cuda.empty_cache() to free the cache.
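
As a rough sketch (standard PyTorch calls, not something taken from this repo), both options could look like:

```python
import torch

# Option 1: cap this process's share of a GPU's memory (available since PyTorch 1.8);
# allocations beyond the fraction raise an out-of-memory error instead of growing the cache
torch.cuda.set_per_process_memory_fraction(0.6, device=0)

# Option 2: release cached-but-unused blocks back to the driver,
# e.g. after validation or at the end of each epoch
torch.cuda.empty_cache()
```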

xy-guo avatar Nov 12 '21 01:11 xy-guo

Thanks for the help. I tried torch.cuda.empty_cache(), but it did not work. I am looking for another solution.
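
One more thing that might be worth trying (an untested assumption on my side, not verified with this repo) is tuning PyTorch's caching allocator through the PYTORCH_CUDA_ALLOC_CONF environment variable, available in recent PyTorch releases (1.10+):

```python
import os

# Must be set before any CUDA memory is allocated (or exported in the shell
# that launches dist_train.sh). max_split_size_mb limits how large cached
# blocks can grow, which can reduce fragmentation and over-reservation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # the allocator reads the variable on first CUDA use
```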

revisitq avatar Nov 12 '21 02:11 revisitq

Hello, may I ask whether the GPU OOM problem has been solved?

zcspike avatar Apr 23 '23 14:04 zcspike