sdfstudio
Why is there such a big difference in the results between the models trained with nerfacto and neus-facto?
Describe the bug
ns-train neus-facto --pipeline.model.sdf-field.inside-outside False --pipeline.model.sdf-field.bias 0.3 --pipeline.model.background-model grid --pipeline.model.eikonal-loss-mult 0.0001 nerfstudio-data --data data/nerfstudio/tangtousimple
But the result generated by the command below is rendered almost identically to a photograph.
ns-train nerfacto --data data/sdfstudio/tangtousimple
A little better after adding --pipeline.model.sdf-field.use-appearance-embedding True
Hi, could you check how the near and far planes are set? Could you try to manually set them from the CLI with --pipeline.model.near-plane 0.01 --pipeline.model.far-plane 1000 --pipeline.model.overwrite-near-far-plane True --pipeline.model.background-model grid ?
We have just updated neus-facto to also use a proposal network for background modeling, which will likely generate better results.
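For context on the near/far flags suggested above: the near and far planes bound the depths at which points are sampled along each camera ray, so a far plane that is too small can clip background geometry entirely. A minimal standalone sketch (not sdfstudio's actual sampler, which is implemented in PyTorch) of uniform sampling between those bounds:

```python
# Sketch: how near/far planes bound sample depths along a camera ray.
# A ray point is origin + t * direction; only t in [near, far] is evaluated.

def sample_depths(near: float, far: float, num_samples: int) -> list[float]:
    """Uniformly spaced sample depths t in [near, far]."""
    step = (far - near) / (num_samples - 1)
    return [near + i * step for i in range(num_samples)]

# With the CLI values above, samples span almost the whole scene:
depths = sample_depths(near=0.01, far=1000.0, num_samples=5)
print(depths)
```

Real samplers use more elaborate spacing (e.g. proposal-network-guided sampling), but the near/far clamp applies the same way.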
@niujinshuchong It got worse after adding these parameters.
ns-train neus-facto --pipeline.model.sdf-field.inside-outside False --pipeline.model.sdf-field.use-appearance-embedding True --pipeline.model.sdf-field.bias 0.3 --pipeline.model.near-plane 0.01 --pipeline.model.far-plane 1000 --pipeline.model.overwrite-near-far-plane True --pipeline.model.background-model grid --pipeline.model.eikonal-loss-mult 0.0001 nerfstudio-data --data data/nerfstudio/tangtousimple
I will try the updated version again, as well as the bakedangelo model. Thanks for your work.
@niujinshuchong The latest code throws an error: AttributeError: 'NeuSFactoModel' object has no attribute 'curvature_loss_multi_factor'
@niujinshuchong What is the minimum VRAM requirement for running BakedAngelo? I'm experiencing a shortage of graphics memory when running the BakedAngelo model with 130 images. My VRAM is 12GB.
@xiemeilong The default setting of bakedangelo runs on 24 GB GPUs. You could try to reduce the number of training rays. Btw, why do you use a very small eikonal loss?
@niujinshuchong Because training with the default configuration gave very poor results, I found the value 0.0001 in one of your examples, and it improved the result. Moreover, changing it to 0.001 made it worse.
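For readers unfamiliar with the flag being discussed: --pipeline.model.eikonal-loss-mult scales the eikonal regularizer, which pushes the norm of the SDF gradient toward 1 (a true signed distance function satisfies |grad f| = 1 everywhere). A minimal sketch with plain floats, assuming the standard mean-squared form (sdfstudio computes this with torch on sampled points):

```python
# Sketch of the eikonal regularizer: mean squared deviation of the
# SDF gradient norms from 1. The multiplier discussed above scales
# this term in the total training objective.
import math

def eikonal_loss(gradients: list[tuple[float, float, float]]) -> float:
    penalties = []
    for gx, gy, gz in gradients:
        norm = math.sqrt(gx * gx + gy * gy + gz * gz)
        penalties.append((norm - 1.0) ** 2)
    return sum(penalties) / len(penalties)

# A perfect SDF gradient field (unit-norm gradients) gives zero loss:
print(eikonal_loss([(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]))  # 0.0
```

A smaller multiplier lets the field deviate from a true SDF to fit the images better, which can help or hurt reconstruction depending on the scene.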
@xiemeilong Do you mind sharing your data? I can check when I have time.
@niujinshuchong Even when I set my pipeline.datamanager.train_num_rays_per_batch to 256, I still receive a CUDA out of memory error.
@niujinshuchong I am sorry; as it is an electrical facility, the images are not allowed to be made public.
OK, you can also reduce the learnable parameters of bakedangelo such as log2_hashmap_size and hash_features_per_level.
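To see why shrinking log2_hashmap_size and hash_features_per_level saves memory: in a multi-resolution hash encoding (the kind bakedangelo's SDF field uses), each level stores up to 2**log2_hashmap_size feature vectors. A rough back-of-the-envelope estimate (an upper bound, not sdfstudio's exact allocation; the assumed num_levels=16 is only illustrative):

```python
# Rough parameter-count estimate for a multi-resolution hash grid.
# Each level stores at most 2**log2_hashmap_size entries of
# features_per_level floats.

def hash_grid_params(num_levels: int, log2_hashmap_size: int,
                     features_per_level: int) -> int:
    return num_levels * (2 ** log2_hashmap_size) * features_per_level

# Dropping log2_hashmap_size by 1 halves the encoding parameters:
big = hash_grid_params(num_levels=16, log2_hashmap_size=19, features_per_level=2)
small = hash_grid_params(num_levels=16, log2_hashmap_size=18, features_per_level=2)
print(big, small)  # big is exactly 2x small
```

Since optimizer state (e.g. Adam moments) is proportional to the parameter count, halving the table size saves considerably more than the raw feature memory.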
--pipeline.model.near-plane 0.01 --pipeline.model.far-plane 1000 --pipeline.model.overwrite-near-far-plane True --pipeline.model.background-model grid
I have the same question. Using nerfstudio's nerfacto, I didn't adjust any parameters at all, yet the result was quite good.
Hi, currently the SDF-based methods are not very robust. Since many people have found that nerfacto gets good results, we will try to add a new option to use nerfacto for pre-training and then switch to SDF. Will make an update if it works.
@XinyueZ Could you share your data so that we could have a try and compare?
@niujinshuchong The upper part of the object was not captured. Could this be the reason why the SDF model failed to reconstruct it well?
Today I took new photos and once again achieved excellent results using Nerfacto, while Neus-Facto didn't work well. I can share this data with you.
ns-train neus-facto --pipeline.model.sdf-field.inside-outside False --pipeline.model.sdf-field.use-appearance-embedding True --pipeline.model.sdf-field.bias 0.3 --pipeline.model.near-plane 0.01 --pipeline.model.far-plane 1000 --pipeline.model.overwrite-near-far-plane True --pipeline.model.background-model grid nerfstudio-data --data data/nerfstudio/shizi
https://drive.google.com/file/d/1CMbkWRTDJB6DIod3B2tPsIitpJGWYyPn/view
I have the same question, but with the bakedangelo method.
@xiemeilong I don't have access permission for your data.
@flow-specter Looks the model overfits to the background color. Maybe you could try to set the level_init to a smaller value.
@niujinshuchong https://drive.google.com/file/d/1Yv4cI9OebVoWIxVlYQc_YYafiVnv7jni/view?usp=sharing
@xiemeilong I just downloaded your data, but it only has images; the camera information is missing. Also, the images are different from the ones you shared above?
Maybe you could check whether some of the outputs are NaN. When I was using nerfacto on my own outdoor dataset, some of the assertions failed at the first eval image. I then added some NaN assertions and found that NaN appeared at a very early stage of training (about 250 iterations), but the program didn't stop until the first eval image.
@meneldil12555 I think I met the same problem as you: when I use neus-facto (bakedsdf doesn't do this) to train my outdoor dataset, it breaks down at every first eval (I checked nvidia-smi and the VRAM consumption keeps increasing until OOM, then the run stops). So what is the "output" you mentioned, color/density/...? How do I check or log it? Thanks.
At that time I found the losses were NaN in the wandb logs, and during the first eval, assertions on the output image stopped the program. If this is your problem, you can try adding more assertions in the model or field parts of the code.
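The fail-fast pattern described above can be sketched as follows. In a real training loop you would call torch.isnan(tensor).any() on each model output (rgb, density, sdf, ...) every N steps; here the same pattern is shown with plain floats so it runs standalone, and the output names are only illustrative:

```python
# Sketch: assert on NaN in model outputs during training, so the run
# stops at the step where NaN first appears instead of crashing much
# later at the first eval image.
import math

def assert_no_nan(outputs: dict[str, list[float]], step: int) -> None:
    """Raise as soon as any named output contains a NaN value."""
    for name, values in outputs.items():
        if any(math.isnan(v) for v in values):
            raise AssertionError(f"NaN in '{name}' at step {step}")

assert_no_nan({"rgb": [0.1, 0.2], "density": [3.0]}, step=250)  # passes
```

Checking the losses themselves (they are already scalars in the wandb logs) is the cheapest place to start; asserting on field outputs narrows down which component produced the NaN first.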