Out of memory while trying to allocate 58796148776 bytes during training
Hi, I get an out-of-GPU-memory error when trying to train on the 360 data such as Bonsai and Stump. The command is "python -m train --gin_configs=configs/360.gin --gin_bindings="Config.data_dir = '${DATA_DIR}'" --gin_bindings="Config.checkpoint_dir = '${DATA_DIR}/checkpoints'" --logtostderr"
Running on WSL (Ubuntu 20.04).
The output error is below.
Any suggestions? Thanks!
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/ck/miniconda3/envs/multinerf/lib/python3.9/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/ck/miniconda3/envs/multinerf/lib/python3.9/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/mnt/c/gitcode/multinerf/train.py", line 288, in
I had the same issue. Reducing batch_size to match the available GPU memory helped in my case. See the similar ticket here. Also check the OOM errors section of the README.
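For reference, a minimal sketch of how the batch size can be lowered from the command line via a gin binding, assuming the parameter is Config.batch_size as in the stock configs (the value 8192 is just an example; halve it again if you still run out of memory):

# Same training command as above, with one extra binding to shrink the batch size
python -m train \
  --gin_configs=configs/360.gin \
  --gin_bindings="Config.data_dir = '${DATA_DIR}'" \
  --gin_bindings="Config.checkpoint_dir = '${DATA_DIR}/checkpoints'" \
  --gin_bindings="Config.batch_size = 8192" \
  --logtostderr

Alternatively, you can edit the same value directly in configs/360.gin instead of passing it on the command line.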