How much memory is required to replicate the paper result?
You said you can train a NeRF using only one GPU.
I ran your code with llff_config.txt on a Tesla V100 GPU with 16 GB of memory, and it runs out of memory.
How much memory is required to run with your paper configs? And is there any way to train on multiple GPUs if I don't have that much memory on a single one?
The code should not use more than 16 GB of memory and should not hit OOM errors on a single V100 GPU. We used V100s for the paper results. Is there another process already running on the GPU?
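A quick way to check whether something else is holding GPU memory is to list the compute processes before launching training. This is a generic sketch using the NVIDIA driver's `nvidia-smi` utility, not something specific to this repo; the guard around the call is just so the snippet also runs on a machine without an NVIDIA driver.

```shell
# List processes currently holding memory on the GPU, so a stale
# training run or notebook kernel can be found and killed before
# starting NeRF training. Assumes nvidia-smi is on PATH when a GPU
# is present; otherwise prints a note instead of failing.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-compute-apps=pid,process_name,used_memory --format=csv
else
    echo "nvidia-smi not found: no NVIDIA driver on this machine"
fi
```

If the list is non-empty, the reported `used_memory` is subtracted from the 16 GB available to your run, which can be enough to push it over the edge.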