TensoRF
Memory Required for Training
How much GPU memory is required for training? I am using RTX 2080, 11GB. I tried to train on the lego dataset using the config file provided, and I get a memory error.
- What parameters need to be changed in the config file to train with less GPU memory?
I'm having similar issues on an RTX 2070 (8GB). Which params can be modified to lower the GPU memory requirements?
Thanks!
I reduced the `batch_size` from 4096 to 1024 and it just worked.
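For reference, that change is a one-line edit in the config file (the value 1024 is the one reported above; per-iteration memory should scale roughly linearly with this, so halving again trades speed for further savings):

```
batch_size = 1024  # was 4096; number of rays per training batch
```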
@apchenstu I was unable to train it on an RTX 2080 (11GB). I then tried training on a Quadro RTX 8000 (48GB), and it took around 20GB of memory on that single GPU. Using the same dataset on the same GPU, I trained Plenoxels and it took around 14GB.
I have tried both methods with several datasets/scans, and TensoRF takes more memory on every one of them, while the paper reports the complete opposite. The final TensoRF model is, however, significantly smaller than the Plenoxels one.
I am using the following configuration for training:
```
n_iters = 30000
batch_size = 4096

N_voxel_init = 2097156 # 128**3
N_voxel_final = 27000000 # 300**3
upsamp_list = [2000,3000,4000,5500,7000]
update_AlphaMask_list = [2000,4000]

N_vis = 5
vis_every = 10000

render_test = 1

n_lamb_sigma = [16,16,16]
n_lamb_sh = [48,48,48]
model_name = TensorVMSplit

shadingMode = MLP_Fea
fea2denseAct = softplus

view_pe = 2
fea_pe = 2

TV_weight_density = 0.1
TV_weight_app = 0.01

rm_weight_mask_thre = 1e-4
```
Hi, I think it is probably because the number of ray samples scales with `N_voxel_final`, which results in the high memory cost. You may try reducing `N_voxel_final`, or turn down the number of ray samples directly by setting `nSamples` or `step_ratio`.
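To see why `N_voxel_final` drives the sample count, here is a back-of-envelope sketch (my own approximation, not the project's actual sampler): assuming the renderer steps along each ray at `step_ratio` times the voxel size, the number of samples is bounded by the grid diagonal divided by the step, so it grows with the cube root of `N_voxel_final`.

```python
import math

def approx_samples_per_ray(n_voxel_final: int, step_ratio: float = 0.5) -> int:
    """Rough upper bound on samples per ray for a cubic grid.

    Assumption (not stated in this thread): the sampler steps through
    the volume at step_ratio * voxel_size, so the count is at most the
    grid diagonal (in voxel units) divided by step_ratio.
    """
    reso = round(n_voxel_final ** (1 / 3))  # voxels per axis for a cubic grid
    diagonal = math.sqrt(3) * reso          # grid diagonal in voxel units
    return int(diagonal / step_ratio)

# 300**3 final grid vs. 128**3 initial grid: per-ray samples (and hence
# per-iteration memory) grow with the cube root of the voxel count.
print(approx_samples_per_ray(27_000_000))  # final grid
print(approx_samples_per_ray(2_097_152))   # initial grid
```

Under this model, shrinking `N_voxel_final` from 300³ to, say, 200³ cuts samples per ray by a third, and raising `step_ratio` reduces them proportionally.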