Why does neuralangelo train and converge much slower than neus?
As mentioned in the title, I ran these experiments on the mipnerf360 bicycle scene. I know that the numerical gradients slow down training, but why is the convergence slower as well?
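For intuition on the cost side, here is a minimal sketch of a numerical SDF gradient via central differences (sdf_fn, the shapes, and eps are assumptions for illustration, not the sdfstudio implementation): each sample needs six extra SDF evaluations instead of a single autograd backward pass.

import torch

# Central-difference SDF gradient: six extra SDF evaluations per sample,
# versus one autograd backward pass for the analytical gradient.
# sdf_fn maps (M, 3) positions to (M,) signed distances; eps is the step size.
def numerical_gradient(sdf_fn, x, eps=1e-3):
    offsets = eps * torch.eye(3, device=x.device, dtype=x.dtype)   # (3, 3)
    pos = (x[:, None, :] + offsets[None, :, :]).reshape(-1, 3)     # (N*3, 3)
    neg = (x[:, None, :] - offsets[None, :, :]).reshape(-1, 3)     # (N*3, 3)
    f_pos = sdf_fn(pos).reshape(-1, 3)
    f_neg = sdf_fn(neg).reshape(-1, 3)
    return (f_pos - f_neg) / (2.0 * eps)                           # (N, 3)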
Here are the commands I ran for neuralangelo and neus. Limited by VRAM, I set --pipeline.model.sdf-field.hash-features-per-level 2 --pipeline.model.sdf-field.log2-hashmap-size 19 for neuralangelo, and for a fair comparison neus uses the same settings.
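As a rough sanity check on the VRAM point, the hash table size implied by those two flags can be estimated as below (the level count and fp16 storage are assumptions; Adam optimizer state and the MLPs add more on top).

num_levels = 16               # assumed multi-resolution hash-grid depth
features_per_level = 2        # --pipeline.model.sdf-field.hash-features-per-level
log2_hashmap_size = 19        # --pipeline.model.sdf-field.log2-hashmap-size
bytes_per_param = 2           # assuming fp16 table entries

params = num_levels * (2 ** log2_hashmap_size) * features_per_level
print(f"{params / 1e6:.1f}M hash-grid params, ~{params * bytes_per_param / 2**20:.0f} MiB")
# ~16.8M params, ~32 MiB for the table alone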
# neuralangelo
ns-train neuralangelo --pipeline.model.sdf-field.use-grid-feature True \
--pipeline.model.sdf-field.hidden-dim 256 \
--pipeline.model.sdf-field.num-layers 2 \
--pipeline.model.sdf-field.num-layers-color 2 \
--pipeline.model.sdf-field.use-appearance-embedding False \
--pipeline.model.sdf-field.geometric-init True \
--pipeline.model.sdf-field.inside-outside False \
--pipeline.model.sdf-field.bias 0.5 \
--pipeline.model.sdf-field.beta-init 0.3 \
--pipeline.model.sdf-field.hash-features-per-level 2 \
--pipeline.model.sdf-field.log2-hashmap-size 19 \
--trainer.steps-per-eval-image 1000 \
--pipeline.datamanager.train-num-rays-per-batch 1024 \
--trainer.steps-per-save 10000 --trainer.max-num-iterations 50001 \
--pipeline.model.background-model mlp \
--vis wandb --experiment-name debug mipnerf360-data \
--data data/nerfstudio-data-mipnerf360/bicycle
# neus
ns-train neus --pipeline.model.sdf-field.use-grid-feature True \
--pipeline.model.sdf-field.hidden-dim 256 \
--pipeline.model.sdf-field.num-layers 2 \
--pipeline.model.sdf-field.num-layers-color 2 \
--pipeline.model.sdf-field.use-appearance-embedding False \
--pipeline.model.sdf-field.geometric-init True \
--pipeline.model.sdf-field.inside-outside False \
--pipeline.model.sdf-field.bias 0.5 \
--pipeline.model.sdf-field.beta-init 0.3 \
--pipeline.model.sdf-field.hash-features-per-level 2 \
--pipeline.model.sdf-field.log2-hashmap-size 19 \
--trainer.steps-per-eval-image 1000 \
--pipeline.datamanager.train-num-rays-per-batch 1024 \
--trainer.steps-per-save 10000 --trainer.max-num-iterations 50001 \
--pipeline.model.background-model mlp \
--vis wandb --experiment-name debug \
--machine.num-gpus 2 mipnerf360-data \
--data data/nerfstudio-data-mipnerf360/bicycle
The result of neus at 14k iterations:
The result of neuralangelo at 14k iterations:
The result of neuralangelo at 26k iterations is maybe comparable, but still not good enough.
Hi, that's because neuralangelo uses a coarse-to-fine strategy that gradually activates higher levels of the feature grids, so only the coarse grids are used at the beginning of training.
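A minimal sketch of that schedule, with assumed names and step counts (the actual schedule lives in the neuralangelo config): higher hash-grid levels are masked out and unlocked one at a time as training progresses.

import torch

# Coarse-to-fine feature masking (level_init / steps_per_level values assumed).
def grid_feature_mask(step, num_levels=16, features_per_level=2,
                      level_init=4, steps_per_level=5000):
    # Number of active levels grows by one every `steps_per_level` steps.
    active = min(num_levels, level_init + step // steps_per_level)
    mask = torch.zeros(num_levels * features_per_level)
    mask[: active * features_per_level] = 1.0
    return mask   # multiplied element-wise with the concatenated grid features

Early on only the coarse levels contribute, so fine geometric detail (and the loss reduction it brings) only shows up once those levels are unlocked, which is why the early-iteration results lag behind neus.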
Btw, I suggest using bakedangelo since it is much faster and has better background modelling.
Thank you for the reminder! I'll try your advice, and thank you for your reply!
@niujinshuchong Sorry for bothering you again. I tried bakedangelo as you advised, and I set the scene_scale for mipnerf360 so that it learns the background mesh. It only trained for 180k iterations because of the instability of my machine.
ns-train bakedangelo --pipeline.model.sdf-field.use-grid-feature True \
--pipeline.model.sdf-field.hidden-dim 256 \
--pipeline.model.sdf-field.num-layers 2 \
--pipeline.model.sdf-field.num-layers-color 2 \
--pipeline.model.sdf-field.use-appearance-embedding True \
--pipeline.model.sdf-field.geometric-init True \
--pipeline.model.sdf-field.inside-outside False \
--pipeline.model.sdf-field.bias 0.5 \
--pipeline.model.sdf-field.beta-init 0.1 \
--pipeline.model.sdf-field.hash-features-per-level 2 \
--pipeline.model.sdf-field.log2-hashmap-size 19 \
--pipeline.model.level-init 4 \
--trainer.steps-per-eval-image 1000 \
--pipeline.datamanager.train-num-rays-per-batch 2048 \
--trainer.steps-per-save 10000 --trainer.max-num-iterations 500001 \
--pipeline.model.background-model grid \
--vis wandb --experiment-name bakedangelo-bicycle \
--machine.num-gpus 2 mipnerf360-data \
--data data/nerfstudio-data-mipnerf360/bicycle
For mesh extraction, I set --bounding-box-min -2.0 -2.0 -2.0 --bounding-box-max 2.0 2.0 2.0 --resolution 4096 --simplify-mesh True.
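Conceptually, bounding-box mesh extraction samples the SDF on a dense grid inside the box and runs marching cubes on it; here is a minimal sketch under assumed names (sdf_fn, the chunking, and the default resolution are illustration only, not the ns-extract-mesh code). Only geometry whose zero level set lies inside the box can end up in the mesh.

import numpy as np
import torch
from skimage import measure

# Sample the SDF inside [bbox_min, bbox_max] and run marching cubes.
# sdf_fn maps (M, 3) positions to (M,) signed distances.
def extract_mesh(sdf_fn, bbox_min, bbox_max, resolution=512, chunk=2**20):
    axes = [np.linspace(lo, hi, resolution) for lo, hi in zip(bbox_min, bbox_max)]
    pts = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    sdf = np.empty(len(pts), dtype=np.float32)
    with torch.no_grad():
        for i in range(0, len(pts), chunk):          # chunk to bound memory
            batch = torch.from_numpy(pts[i:i + chunk]).float()
            sdf[i:i + chunk] = sdf_fn(batch).cpu().numpy()
    volume = sdf.reshape(resolution, resolution, resolution)
    spacing = [(hi - lo) / (resolution - 1) for lo, hi in zip(bbox_min, bbox_max)]
    verts, faces, _, _ = measure.marching_cubes(volume, level=0.0, spacing=spacing)
    return verts + np.asarray(bbox_min), faces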
I notice that the foreground is much better. As shown in the figure below, the wire of the bicycle wheels can be reconstructed, which is difficult for other methods.
However, the background mesh is still missing, even though I set
--bounding-box-min -4.0 -4.0 -4.0 --bounding-box-max 4.0 4.0 4.0
while the background trees and bushes appear in the rendered depth map.
I am wondering why, and how I can get the foreground and background meshes at the same time. Thank you!
Also, the reconstructed ground mesh has large voids, perhaps because of limited network capacity?