Is there a good method for improving mesh quality?
Is there a good way to improve the mesh quality? I'm currently running:
ns-train nerfacto --pipeline.model.predict-normals True --data output/scan --pipeline.datamanager.train-num-rays-per-batch 2000
but I don't know whether the resulting Poisson mesh quality is much better than what instant-ngp gives me on Windows.
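For reference, the export step I run afterwards looks roughly like this (the config path is a placeholder for my run):
ns-export poisson --load-config outputs/scan/nerfacto/<timestamp>/config.yml --output-dir exports/mesh/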
Meshes generated by any NeRF will be quite rough; there isn't much you can do to get smooth meshes. Use NeuS and the other methods supported by SDFStudio.
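For example, a NeuS-style run in SDFStudio would look roughly like this (the data path is a placeholder; check ns-train neus-facto --help for the exact flags):
ns-train neus-facto --pipeline.model.sdf-field.inside-outside False sdfstudio-data --data data/scan-sdf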
Thanks :). I tried running SDFStudio on NeRF-processed data with depth, normals, RGB, and a foreground mask, but a completely different mesh came out. I was hoping I could get a good mesh out of Nerfstudio, which at least runs normally for me, but it seems I have no choice.
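The conversion step I'm assuming here is SDFStudio's processing script, roughly as below (the flag values are from my setup and may need double-checking):
python scripts/datasets/process_nerfstudio_to_sdfstudio.py --data output/scan --output-dir data/scan-sdf --data-type colmap --scene-type object --mono-prior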
Can you show me the meshes generated by SDFStudio and Nerfstudio? The initialization parameters must be set properly when using SDFStudio, or the results will look ugly.
Here is an example.
The first picture shows the mesh generated from depth-nerfacto->Poisson.
The second picture shows the mesh generated from neus-facto.
Here is my problem: https://github.com/autonomousvision/sdfstudio/issues/277. I set the foreground mask to true and added the foreground path. My dataset looks like this JSON file, and the input data is in Docker.
@Ar2000S How can I set the initialization parameters properly?
My NeuS result is much worse.
Sorry for the (very) late reply. I tried to run mono-neus and also neus-facto on your dataset. The result, in short, came out to be an egg.
The parameters I use usually give a rough shape in around 5k iterations, but your dataset didn't. Maybe it is because of the extremely small image size? Can you provide the original RGB images? I'll see if that does better.
As for the foreground masks, you will have to add their file paths in meta_data.json as well.
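A minimal Python sketch of that step, assuming the key names used by the sdfstudio-data format ("has_foreground_mask" and a per-frame "foreground_mask") and a *_foreground_mask.png naming scheme; adjust both to your dataset:

import json
from pathlib import Path

root = Path("data/scan-sdf")  # hypothetical dataset directory
meta_path = root / "meta_data.json"
meta = json.loads(meta_path.read_text())

# Mark the dataset as having masks, then point every frame at its mask file.
meta["has_foreground_mask"] = True
for frame in meta["frames"]:
    # e.g. 000000_rgb.png -> 000000_foreground_mask.png (naming is an assumption)
    stem = Path(frame["rgb_path"]).stem.replace("_rgb", "")
    frame["foreground_mask"] = f"{stem}_foreground_mask.png"

meta_path.write_text(json.dumps(meta, indent=2))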
These are the parameters I used; the full command assembling them follows the list.
--pipeline.model.sdf-field.geometric-init True
--pipeline.model.background-model mlp
--pipeline.model.sdf-field.use-grid-feature True
--pipeline.model.sdf-field.use-appearance-embedding True
--pipeline.model.sdf-field.inside-outside False
--pipeline.model.mono-normal-loss-mult 0.01
--pipeline.model.mono-depth-loss-mult 0.01
--pipeline.model.near-plane 0.05
--pipeline.model.far-plane 100
--pipeline.model.overwrite-near-far-plane True
--pipeline.model.sdf-field.bias 0.3
--pipeline.model.eikonal-loss-mult 0.01
--pipeline.model.sdf-field.hash-features-per-level 2
--pipeline.model.sdf-field.num-layers 1
--pipeline.model.sdf-field.position-encoding-max-degree 8
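Put together, the invocation is roughly the following, with the flags above inserted in the middle (the data parser and path are placeholders for my setup):
ns-train neus-facto [flags above] sdfstudio-data --data data/scan-sdf
Afterwards the mesh comes from ns-extract-mesh --load-config <run>/config.yml --output-path mesh.ply, if I remember the flags right.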
@Ar2000S
https://drive.google.com/drive/folders/1Q9zSM8sCsQ4n-6QKl4GEujuznx-abxvh
Here are my battery images.