Poor mesh quality with official Neuralangelo compared to the sdfstudio implementation
I obtained very good results using Neuralangelo in sdfstudio, but on the same image data, the official Neuralangelo implementation produced results like this.
results using neuralangelo in sdfstudio
Hi @xiemeilong
Thanks for reporting this. This is indeed odd and unexpected. To narrow down the problem, could you help by providing the following information?
- Do you use the same preprocessing steps for both implementations? If not, could you point me to the preprocessing step in sdfstudio?
- Can you share configs/commands for both implementations? This helps me to compare the hyperparameters if there are differences.
- Can you share the mesh extraction command for both implementations?
Thank you!
Hi @xiemeilong, in addition to the above @mli0603 mentioned, we also have a fix (#41) on the scripts. If you were extracting the mesh from an earlier checkpoint, please pull again and let us know if the issue persists. Thanks!
I get the same poor mesh too.
@chenhsuanlin It still does not work with the latest code.
@mli0603
preprocessing commands in Neuralangelo:
bash projects/neuralangelo/scripts/run_colmap.sh ../data/neuralangelo/tangtou
python3 projects/neuralangelo/scripts/convert_data_to_json.py --data_dir ~/labs/data/neuralangelo/tangtou/dense --scene_type outdoor
python3 projects/neuralangelo/scripts/generate_config.py --experiment_name tangtou --data_dir ~/labs/data/neuralangelo/tangtou/dense/ --auto_exposure_wb
preprocessing command in sdfstudio:
ns-process-data images --data tangtou --output-dir nerfstudio/tangtou --num-downscales 2
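Since the two pipelines preprocess differently, one quick way to rule out a pose/registration problem is to compare the camera metadata both pipelines write out. A minimal sketch (the `transforms.json` field names below follow the instant-ngp/nerfstudio-style convention; treat the exact schema as an assumption and adjust to what your preprocessing actually emits):

```python
import json

def summarize_transforms(path):
    """Summarize a transforms.json: registered frame count and intrinsics.

    Field names (fl_x, fl_y, w, h, frames) follow the instant-ngp style
    convention; adjust if your preprocessing writes a different schema.
    """
    with open(path) as f:
        meta = json.load(f)
    return {
        "num_frames": len(meta.get("frames", [])),
        "focal": (meta.get("fl_x"), meta.get("fl_y")),
        "resolution": (meta.get("w"), meta.get("h")),
    }

# Demo with an in-memory stand-in for a real transforms.json:
sample = {"fl_x": 1200.0, "fl_y": 1200.0, "w": 1920, "h": 1080,
          "frames": [{"file_path": f"images/{i:04d}.png"} for i in range(85)]}
with open("/tmp/transforms_demo.json", "w") as f:
    json.dump(sample, f)
print(summarize_transforms("/tmp/transforms_demo.json"))
```

If one pipeline registers noticeably fewer frames (or different intrinsics) on the same images, the quality gap may come from preprocessing rather than the model.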
train command in neuralangelo:
torchrun --nproc_per_node=1 train.py --logdir=logs/my/tangtou --config=/home/xxx/labs/neuralangelo/projects/neuralangelo/configs/custom/tangtou.yaml --show_pbar
train command in sdfstudio (I am using the same settings the sdfstudio author used for testing, i.e. bakedangelo, not the pure Neuralangelo algorithm):
ns-train bakedangelo --machine.num-gpus 1 --pipeline.model.level-init 8 --trainer.steps-per-eval-image 5000 --pipeline.datamanager.train-num-rays-per-batch 2048 --pipeline.datamanager.eval-num-rays-per-batch 512 --pipeline.model.sdf-field.use-appearance-embedding True --pipeline.model.background-color white --pipeline.model.sdf-field.bias 0.1 --pipeline.model.sdf-field.inside-outside False --pipeline.model.background-model grid nerfstudio-data --data data/nerfstudio/tangtou
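The `--pipeline.model.level-init 8` flag above ties into Neuralangelo's coarse-to-fine schedule, where hash-grid levels are activated progressively during training. A minimal sketch of such a schedule (the function name and step constants are illustrative, not sdfstudio's actual defaults):

```python
def active_levels(step, init_levels=8, max_levels=16, steps_per_level=5000):
    """Number of active hash-grid levels at a given training step.

    Starts with `init_levels` coarse levels and unlocks one finer level
    every `steps_per_level` steps, capped at `max_levels`. The constants
    here are illustrative, not the actual sdfstudio defaults.
    """
    return min(max_levels, init_levels + step // steps_per_level)

for step in (0, 5000, 40000, 100000):
    print(step, active_levels(step))
```

If one implementation activates fine levels much earlier (or starts with more levels) than the other, the optimization can latch onto noise before the coarse geometry has settled, which is one plausible source of a quality gap between the two runs.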
extract command in neuralangelo:
torchrun --nproc_per_node=1 projects/neuralangelo/scripts/extract_mesh.py \
--config=/home/xxx/labs/neuralangelo/projects/neuralangelo/configs/custom/tangtou.yaml \
--checkpoint=logs/my/tangtou/epoch_01736_iteration_000500000_checkpoint.pt \
--output_file=tangtou.ply \
--resolution=2048 \
--block_res=128
extract command in sdfstudio:
ns-extract-mesh --load-config outputs/nerfstudio-tangtousimple/bakedangelo/2023-07-30_123845/config.yml --output-path meshes/tangtousimple.ply --resolution 2048 --marching_cube_threshold 0.001 --create_visibility_mask True
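One difference worth noting between the two extraction commands: the sdfstudio call passes a nonzero `--marching_cube_threshold 0.001`, i.e. it extracts a level set slightly offset from the zero crossing of the SDF. A numpy-only sketch on an analytic sphere SDF shows the effect of the threshold (the grid setup is purely illustrative):

```python
import numpy as np

def level_set_volume(threshold, n=64):
    """Fraction of grid samples inside the level set {sdf < threshold}
    for a unit-sphere SDF sampled on the cube [-1.5, 1.5]^3."""
    xs = np.linspace(-1.5, 1.5, n)
    x, y, z = np.meshgrid(xs, xs, xs, indexing="ij")
    sdf = np.sqrt(x**2 + y**2 + z**2) - 1.0  # signed distance to unit sphere
    return float(np.mean(sdf < threshold))

# A positive threshold extracts a slightly inflated surface,
# a negative one an eroded surface:
for t in (-0.05, 0.0, 0.05):
    print(t, level_set_volume(t))
```

A small threshold like 0.001 mostly just thickens the surface marginally, so it is unlikely to explain a gross quality difference on its own, but it is one more hyperparameter that differs between the two runs.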
@xiemeilong @zz7379 we pushed an update to main yesterday that fixed a checkpoint issue, which may be related. Could you pull and try running the pipeline again? Please let me know if there are further issues with the latest code.
@chenhsuanlin The latest version still doesn't work.
@xiemeilong Hi :) Do you still get good results with Neuralangelo in sdfstudio and bad results with the official Neuralangelo?
@iam-machine I did not do any more testing.