sdfstudio
Are you planning to support Neuralangelo?
The work is amazing. Are you planning to support Neuralangelo, which shows great detail?
Hi, we are working on it. Will push it when it's done.
Are you going to implement it from the paper? Because I didn't find any published code from NVIDIA.
Hi, neuralangelo is now implemented and it's further combined with bakedsdf and neus-facto to have better background modeling and more efficient point sampling. Please check it out! Here are some of the reconstruction results:
this is looking great, thank you!
Can you please share the command lines you used for the Barn dataset as an example of using the new pipeline? I cannot reproduce your results.
@cdcseacave The training command is:
ns-train bakedangelo --machine.num-gpus 1 --pipeline.model.level-init 8 --trainer.steps-per-eval-image 5000 --pipeline.datamanager.train-num-rays-per-batch 2048 --pipeline.datamanager.eval-num-rays-per-batch 512 --pipeline.model.sdf-field.use-appearance-embedding True --pipeline.model.background-color white --pipeline.model.sdf-field.bias 0.1 --pipeline.model.sdf-field.inside-outside False --pipeline.model.background-model grid --vis wandb --experiment-name barn_colmap_highres nerfstudio-data --data data/Barn_colmap --downscale-factor 1
You can download the data from here: https://drive.google.com/file/d/1RR_aLJSAqv75tzRe7BUc8_pdor6jB0U2/view?usp=sharing.
Will update the docs soon.
Thanks for your awesome contribution! I wonder at which path I can get the normal map? Also, I'm trying to use your training command and I don't know where to check results during the training process.
Also, may I ask what the training time is?
It is very slow on my RTX 3060 12G while running BakedAngelo; it took one hour to complete only 1%.
Hmm, me too. Is there some way to speed it up?
@xiemeilong @flow-specter The model is trained on an A100 GPU with 40GB VRAM for ~40 hours.
@flow-specter The visualizations are saved to the output folder if you use wandb and can be checked on the wandb website. But you can also use tensorboard via --vis tensorboard.
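For the normal maps specifically: in SDF-based models like these, the rendered normals come from the normalized gradient of the SDF. A minimal, self-contained sketch of that idea (toy sphere SDF for illustration, not sdfstudio's actual code):

```python
import torch

# Toy example only: the normal at a point is the normalized SDF gradient.
def sdf_sphere(x, radius=0.5):
    return x.norm(dim=-1, keepdim=True) - radius

points = torch.randn(4, 3, requires_grad=True)
sdf = sdf_sphere(points)
grad = torch.autograd.grad(sdf.sum(), points)[0]
normals = torch.nn.functional.normalize(grad, dim=-1)
print(normals)  # unit-length normals, one per point
```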
@niujinshuchong Thanks for your hard work. Have you compared the quality metrics (PSNR/SSIM/LPIPS) of bakedangelo and bakedsdf on the mipnerf360 dataset, e.g. the garden scene? By the way, which model is better on 360 unbounded datasets: bakedangelo > bakedsdf? Thank you.
@tianxiaguixin002 I don't have the comparison currently. But based on my previous experiments, bakedsdf with hash encoding usually produces noisy results, and using the progressive training and the curvature loss from neuralangelo helps to get smoother results.
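To make the progressive (coarse-to-fine) training idea concrete, here is a minimal sketch of how hash-grid levels could be activated over training. It is illustrative only and not the exact schedule used in sdfstudio; --pipeline.model.level-init 8 from the command above corresponds to the number of levels active at the start, while the step interval below is a made-up placeholder:

```python
import numpy as np

def level_mask(step, num_levels=16, level_init=8, steps_per_level=5000):
    """Return a 0/1 mask over hash-grid levels; finer levels switch on as training progresses."""
    active = min(num_levels, level_init + step // steps_per_level)
    mask = np.zeros(num_levels)
    mask[:active] = 1.0
    return mask  # per-level hash features get multiplied by this mask

print(level_mask(0))       # only the 8 coarsest levels contribute at step 0
print(level_mask(20000))   # 12 levels active after 20k steps (with this toy schedule)
```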
Hi, can you share the mesh result? My version contains many floaters.
@flow-specter The meshes can be downloaded here: https://drive.google.com/drive/folders/1nDGyEaE1aXKCrWmSUZXifkH7sklaQg3Q?usp=share_link. How does your result look?
You can see many floaters in the sky area
@flow-specter The floaters correspond to the sky region since there is no texture for the sky. I think this is normal (and random) since we don't have special modeling for the sky.
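If you just want them out of the exported mesh, a simple post-processing pass that keeps only large connected components usually removes such sky floaters. A minimal sketch assuming trimesh, with a hypothetical file name and a scene-dependent threshold:

```python
import trimesh

mesh = trimesh.load("bakedangelo_barn.ply")        # hypothetical path to the exported mesh
components = mesh.split(only_watertight=False)      # split into connected components
# keep only sizeable components; the face-count threshold is scene-dependent
kept = [c for c in components if len(c.faces) > 10000]
cleaned = trimesh.util.concatenate(kept)
cleaned.export("bakedangelo_barn_cleaned.ply")
```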
But the Barn scene seems cleaner and has fewer floaters in the sky?
Hi, how do you process the Courthouse data? When I use my own data (nerfstudio data structure), I can't get reasonable results.
@niujinshuchong Hi, I used the same settings as you provided and got a result with a collapsed roof.
The only modification I made was changing the hash resolution from 64->4096 to 32->2048, which is what was used in the Neuralangelo paper.
Are there any other implementation details that I missed? Thanks.
@sta105 I also observed similar artefacts; it is a bit random unfortunately. Regarding 64 -> 4096: this is the same as the paper, since our input range is -2 to 2 (for compatibility with bakedsdf's background modeling) while the paper uses -1 to 1 with 32 -> 2048.
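To make that equivalence concrete, the coarsest and finest grid cells end up with the same physical size in both settings; a quick sanity check:

```python
def cell_size(scene_extent, grid_resolution):
    return scene_extent / grid_resolution

# sdfstudio scene box: [-2, 2] -> extent 4; Neuralangelo paper: [-1, 1] -> extent 2
print(cell_size(4, 64), cell_size(2, 32))       # coarsest level: 0.0625 == 0.0625
print(cell_size(4, 4096), cell_size(2, 2048))   # finest level: ~0.000977 == ~0.000977
```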
@XianSifan --pipeline.model.sdf-field.log2_hashmap_size 20
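For a rough sense of what that setting costs, an upper bound on the hash-grid parameter count is num_levels * 2^log2_hashmap_size * features_per_level (Instant-NGP style layout; features_per_level=2 below is an assumed default, not necessarily what sdfstudio uses):

```python
def hash_grid_params(num_levels, log2_hashmap_size, features_per_level=2):
    # upper bound: assumes every level uses the full hash table
    return num_levels * (2 ** log2_hashmap_size) * features_per_level

for num_levels in (16, 32):
    params = hash_grid_params(num_levels, log2_hashmap_size=20)
    print(num_levels, params, f"~{params * 2 / 2**20:.0f} MiB at fp16 (parameters only)")
```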
When I changed --pipeline.model.sdf-field.num-levels to 32, I ran into this problem...
@flow-specter How did you solve it when using your own data with the nerfstudio data structure?
Hi! The Google Drive folder with the meshes is empty. Could you update it?