Open3D-ML
How to handle CUDA memory allocation issue
Checklist
- [X] I have searched for similar issues.
- [X] I have tested with the latest development wheel.
- [X] I have checked the release documentation and the latest documentation (for the master branch).
My Question
- Description
Dataset: Toronto3D
An error occurred while executing the following:
```
python3 scripts/run_pipeline.py torch -c ml3d/configs/randlanet_toronto3d.yml --dataset.dataset_path /home/kim/Open3D-ML/data/Toronto_3D --pipeline SemanticSegmentation --dataset.use_cache True
```

The error is:

```
RuntimeError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 5.80 GiB total capacity; 4.30 GiB already allocated; 3.94 MiB free; 4.31 GiB reserved in total by PyTorch)
```
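For context, a generic way to react to this kind of `RuntimeError` is to retry the failing step with a smaller batch. The sketch below is plain Python and not Open3D-ML API — `step` and all names are illustrative; it only assumes PyTorch's convention of raising `RuntimeError` with "out of memory" in the message:

```python
def run_with_backoff(step, batch_size, min_batch=1):
    """Retry a training step with a halved batch size on OOM.

    `step` is any callable taking a batch size; it is expected to
    raise RuntimeError containing "out of memory" (the PyTorch
    convention) when the batch is too large. This is a generic
    sketch, not part of Open3D-ML.
    """
    while batch_size >= min_batch:
        try:
            return step(batch_size)
        except RuntimeError as e:
            if "out of memory" not in str(e).lower():
                raise  # unrelated error: do not swallow it
            batch_size //= 2  # back off and retry smaller
    raise RuntimeError("OOM even at minimum batch size")
```

In practice a real pipeline would also need to rebuild its data loader at the new batch size, so lowering the batch size in the config up front is usually simpler.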
Actually, I chose Toronto3D because the SemanticKITTI example in https://github.com/isl-org/Open3D-ML/tree/master/scripts is too big for my HDD. Is there a way to handle the memory allocation issue, or is there a dataset smaller than Toronto3D?
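One common OOM tactic (a general PyTorch one, not specific to Open3D-ML internals) is to shrink the per-step workload in the YAML config, e.g. keys like `batch_size` or `num_points` in `randlanet_toronto3d.yml` — those key names are an assumption about the config schema, not verified. A stdlib-only sketch that rewrites such flat scalar keys in place:

```python
import re
from pathlib import Path

def shrink_config(path, overrides):
    """Rewrite simple 'key: value' lines in a YAML config file.

    Plain text substitution, so it only handles flat scalar keys.
    The key names passed in (e.g. batch_size, num_points) are
    assumptions about the RandLA-Net Toronto3D config, not a
    verified schema -- check the actual file first.
    """
    text = Path(path).read_text()
    for key, value in overrides.items():
        # Replace every 'key: <old value>' line, keeping indentation.
        text = re.sub(rf"^(\s*{key}\s*:\s*).+$",
                      lambda m, v=value: f"{m.group(1)}{v}",
                      text, flags=re.MULTILINE)
    Path(path).write_text(text)
    return text
```

After editing a copy of the config, the same `run_pipeline.py` command can be pointed at it with `-c`.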
When I tested tf with

```
python3 scripts/run_pipeline.py torch -c ml3d/configs/randlanet_toronto3d.yml --dataset.dataset_path /home/kim/Open3D-ML/data/Toronto_3D --pipeline SemanticSegmentation --dataset.use_cache True
```

the run died at 86% of the first epoch (out of 200). The only output message was "DEAD".