
How to handle CUDA memory allocation issue

Open · leviskim17 opened this issue 3 years ago · 1 comment

Checklist

My Question

  • Description
    Dataset: Toronto3D
    An error occurred while executing the following:
    python3 scripts/run_pipeline.py torch -c ml3d/configs/randlanet_toronto3d.yml --dataset.dataset_path /home/kim/Open3D-ML/data/Toronto_3D --pipeline SemanticSegmentation --dataset.use_cache True
    The error is:
    RuntimeError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 5.80 GiB total capacity; 4.30 GiB already allocated; 3.94 MiB free; 4.31 GiB reserved in total by PyTorch)

Actually, the SemanticKITTI example in https://github.com/isl-org/Open3D-ML/tree/master/scripts is too big for my HDD, so I chose Toronto3D instead. Is there a way to handle the memory allocation issue, or is there a smaller dataset than Toronto3D?
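One thing that might help, as a sketch rather than a verified fix: shrinking the batch size and the number of points sampled per batch usually reduces GPU memory use for RandLA-Net. This assumes run_pipeline.py accepts dotted overrides for the pipeline and model sections the same way it does for --dataset.*, and that randlanet_toronto3d.yml exposes pipeline.batch_size, pipeline.val_batch_size, and model.num_points like other RandLA-Net configs; the exact key names may differ in that file.

```bash
# Hedged sketch: reduce GPU memory pressure by lowering the batch size and
# the per-batch point count. The override keys below (pipeline.batch_size,
# pipeline.val_batch_size, model.num_points) are assumptions based on other
# RandLA-Net configs; check ml3d/configs/randlanet_toronto3d.yml for the
# actual names, or edit the YAML directly instead of overriding on the CLI.
python3 scripts/run_pipeline.py torch \
    -c ml3d/configs/randlanet_toronto3d.yml \
    --dataset.dataset_path /home/kim/Open3D-ML/data/Toronto_3D \
    --pipeline SemanticSegmentation \
    --dataset.use_cache True \
    --pipeline.batch_size 1 \
    --pipeline.val_batch_size 1 \
    --model.num_points 16384
```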

leviskim17 · May 03 '22 17:05

When I tested tf with python3 scripts/run_pipeline.py torch -c ml3d/configs/randlanet_toronto3d.yml --dataset.dataset_path /home/kim/Open3D-ML/data/Toronto_3D --pipeline SemanticSegmentation --dataset.use_cache True, the run died at 86% of the first epoch (out of 200). The only output message was "DEAD".
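A run that dies silently mid-epoch with only a "DEAD"/"Killed" message, instead of raising a CUDA error, is often the Linux OOM killer reclaiming host RAM rather than a GPU problem. A quick way to check which resource is running out, using generic Linux/NVIDIA tools (nothing Open3D-ML specific):

```bash
# Hedged diagnosis sketch: distinguish host-RAM exhaustion from GPU OOM.

# Did the kernel OOM killer terminate the python process?
sudo dmesg | grep -iE "killed process|out of memory" | tail -n 20

# How much host RAM and swap are available right now?
free -h

# GPU memory and utilization, sampled every 5 seconds while training runs.
nvidia-smi --query-gpu=memory.used,memory.total,utilization.gpu --format=csv -l 5
```

If dmesg shows the process was killed for host memory, lowering the dataloader load (for example, a smaller batch or fewer cached samples) is the usual direction to try; if the GPU is the bottleneck, the batch-size/num_points overrides sketched above are the first thing to reduce.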

leviskim17 · May 03 '22 18:05