Custom images with colmap get poor results.
Describe the bug
I used DTU images (1600x1200) to build an sdfstudio dataset (meta_data.json) from COLMAP, tested neus-facto, and got poor results.
Here is my process:
1. I ran COLMAP (hloc) and got transforms.json. The command is:
ns-process-data images \
    --data ./data/dtu/scan114/image \
    --sfm_tool hloc \
    --output-dir ./data/dtu/scan114/nerfstudio
2. I ran the process_nerfstudio_to_sdfstudio.py script to get meta_data.json. The command is:
python scripts/datasets/process_nerfstudio_to_sdfstudio.py \
    --data ./data/dtu/scan114/nerfstudio \
    --output-dir ./data/dtu/scan114/sdfstudio \
    --data-type colmap \
    --scene-type object
3. Then I trained neus-facto, once with sdfstudio-data and once with nerfstudio-data. The commands are:
ns-train neus-facto \
    --experiment-name dtu \
    --trainer.max-num-iterations 20001 \
    --trainer.save-only-latest-checkpoint False \
    --trainer.steps_per_save 1000 \
    --pipeline.model.sdf-field.inside-outside False \
    --vis tensorboard sdfstudio-data \
    --data ./data/dtu/scan114/sdfstudio

ns-train neus-facto \
    --experiment-name dtu \
    --trainer.max-num-iterations 20001 \
    --trainer.save-only-latest-checkpoint False \
    --trainer.steps_per_save 1000 \
    --pipeline.model.sdf-field.inside-outside False \
    --vis tensorboard nerfstudio-data \
    --data ./data/dtu/scan114/nerfstudio
4. Finally, I extracted a mesh from each run with:
ns-extract-mesh \
    --load-config outputs/dtu/neus-facto/2023-12-28_170416/config.yml \
    --output-path outputs/dtu/neus-facto/2023-12-28_170416/dtu.ply

ns-extract-mesh \
    --load-config outputs/dtu/neus-facto/2023-12-28_164550/config.yml \
    --output-path outputs/dtu/neus-facto/2023-12-28_164550/dtu.ply
But I get a wrong mesh. Here are my results:
nerfstudio-data:
sdfstudio-data:
Which step went wrong?
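One quick way to narrow this down is to compare the camera centers in the converted meta_data.json against its scene box. The snippet below is a minimal sketch, assuming the file follows the layout of the DTU examples shipped with sdfstudio (per-frame 4x4 camtoworld matrices plus a scene_box with aabb/near/far); key names may need adjusting.

import json
import numpy as np

meta = json.load(open("./data/dtu/scan114/sdfstudio/meta_data.json"))

# Camera centers are the translation part of each camera-to-world matrix
# (assumed key name: "camtoworld", as in the sdfstudio DTU examples).
centers = np.array([np.array(f["camtoworld"])[:3, 3] for f in meta["frames"]])
print("camera center min:", centers.min(axis=0))
print("camera center max:", centers.max(axis=0))
print("mean camera distance from origin:", np.linalg.norm(centers, axis=1).mean())

# For scene-type "object", the object should sit near the origin, inside this box,
# and within [near, far] of the cameras.
scene_box = meta.get("scene_box", {})
print("scene_box aabb:", scene_box.get("aabb"))
print("near/far:", scene_box.get("near"), scene_box.get("far"))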
Could you please send a picture of the camera poses from the viewer? I suspect that the camera poses are not correctly transformed in step 2 (as you described).
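If the viewer is inconvenient, the poses can also be plotted directly from meta_data.json; a minimal matplotlib sketch (same layout assumptions as above) would be:

import json
import numpy as np
import matplotlib.pyplot as plt

meta = json.load(open("./data/dtu/scan114/sdfstudio/meta_data.json"))
c2w = np.array([f["camtoworld"] for f in meta["frames"]])  # (N, 4, 4), assumed layout

centers = c2w[:, :3, 3]
forward = -c2w[:, :3, 2]  # assumes OpenGL-style poses where -z is the viewing direction

ax = plt.figure().add_subplot(projection="3d")
ax.scatter(*centers.T, s=10, label="camera centers")
ax.quiver(*centers.T, *forward.T, length=0.2, color="tab:orange")
ax.scatter([0], [0], [0], color="red", marker="x", label="world origin")
ax.legend()
plt.show()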
Thank you for your reply! Also, I want to correct my image size: the input size is actually (384, 384).
I got the camera poses from COLMAP, and here they are.
colmap:
intrinsics:
extrinsics:
nerfstudio data:
sdfstudio data:
Thanks! The poses look good to me. Could you also show me the poses in the nerfstudio viewer (after running process_nerfstudio_to_sdfstudio.py)?
Here is a capture of the sdfstudio-data in the viewer:
video: https://github.com/autonomousvision/sdfstudio/assets/67808446/8cf67de9-ae0f-4fbe-89a4-7eea31c2fad6
Hi, I think it might be due to the camera normalization. After running COLMAP, the poses are normalized to the [-1, 1] cube, but the real object might no longer be centered at the origin. In that case, the near and far planes might not be set correctly to cover the object. I think you can compare your COLMAP poses with the provided poses.
Could you give me an example of how to fix it?
I think you can use the IDR preprocessing method. It normalizes the poses to the [-1, 1] cube.
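For concreteness, here is a simplified sketch of that idea (not the actual IDR script): it estimates the point the cameras look at via a least-squares intersection of the optical axes, moves it to the origin, and rescales so the camera centers fall inside the [-1, 1] cube. Paths and key names follow the assumptions above; if meta_data.json also carries a scene_box, it would need to be updated consistently.

import json
import numpy as np

path = "./data/dtu/scan114/sdfstudio/meta_data.json"
meta = json.load(open(path))
c2w = np.array([f["camtoworld"] for f in meta["frames"]], dtype=np.float64)

centers = c2w[:, :3, 3]
dirs = -c2w[:, :3, 2]  # viewing directions, assuming OpenGL-style poses

# Point closest to all optical axes (least squares): sum_i (I - d_i d_i^T)(p - o_i) = 0
A = np.zeros((3, 3))
b = np.zeros(3)
for o, d in zip(centers, dirs):
    M = np.eye(3) - np.outer(d, d)
    A += M
    b += M @ o
target = np.linalg.solve(A, b)

# Move that point to the origin and scale so all camera centers lie inside [-1, 1]^3
# (with a 10% margin); the object then sits near the origin of the unit cube.
scale = 1.0 / (np.abs(centers - target).max() * 1.1)
for frame in meta["frames"]:
    m = np.array(frame["camtoworld"], dtype=np.float64)
    m[:3, 3] = (m[:3, 3] - target) * scale
    frame["camtoworld"] = m.tolist()

json.dump(meta, open(path.replace(".json", "_normalized.json"), "w"), indent=2)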