Impact of Gaussian splatting training iterations on final output mesh quality
Hey,
I have a question regarding the impact of the number of Gaussian splatting training iterations on mesh quality.
As the README points out, 7000 iterations are recommended for Gaussian splatting training. However, the output mesh does not look smooth enough to me.
These are the training commands I used. I extracted 300 images from video footage:
```shell
python gaussian_splatting/train.py -s /home/ubuntu/home_office/ --iterations 7000 -m /home/ubuntu/home_office/gaussian_splatting/home_office
python train.py -s /home/ubuntu/home_office/ -c /home/ubuntu/home_office/gaussian_splatting/home_office/ -r sdf --high_poly True --postprocess_mesh True --square_size 5
```
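For reference, a typical way to extract frames from a capture video is with `ffmpeg` (the filename, frame rate, and output folder below are placeholders; the `input/` folder matches what the original 3D Gaussian Splatting `convert.py` script expects, so adjust to your setup):

```shell
# Extract roughly 2 frames per second as high-quality JPEGs
# (tune -vf fps=... so the clip length yields ~300 images):
mkdir -p /home/ubuntu/home_office/input
ffmpeg -i home_office.mp4 -vf fps=2 -qscale:v 2 /home/ubuntu/home_office/input/%04d.jpg
```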
Will increasing the number of Gaussian splatting training iterations improve the mesh quality?
Hello @MagicJoeXZ,
The number of iterations might affect the results, but I don’t think that’s the main issue here. Can I ask, what do your capture/training images look like? Did you take a 360° shot around the chair?
From your image, it looks like the foreground object (the chair) is well reconstructed. However, I agree that the background has a more chaotic geometry. This is a common issue: Most of the time, 3D reconstruction algorithms produce poor results for background elements, for two main reasons:
- The background is usually made of textureless, monochrome surfaces, which create an unsolvable mathematical ambiguity in the inverse projective problem.
- Unlike foreground objects, we usually have far fewer viewpoints of the background in the training images.
Nevertheless, you have several options to improve your results or renders:
- The simple but dumb idea: Change the shading and choose an ambient lighting. The geometry of the background is not accurate, but changing the shading will hide those bad bumps 😄
- You can try the `--low_poly True` configuration. Actually, using fewer polygons is a natural way to smooth the surface and remove some unnecessary bumps. Moreover, when using fewer polygons, SuGaR removes polygons first in areas where they are less needed (low curvature, etc.), so the quality of your geometry will not necessarily decrease. On the website and in the paper, there are several scenes reconstructed with the low_poly configuration (such as the white and red knight or the playroom, for example). You can also adjust the number of vertices yourself. For information, when using `--low_poly True`, the target number of vertices is 200,000. When using `train.py`, you can remove `--low_poly True` and use the argument `--n_vertices_in_mesh 500_000` instead to set the number of vertices to 500,000, for instance. Please refer to the `README.md` file for more details.
- (EDITED) If you want to keep a high number of vertices but get a smoother surface, you can increase the mesh regularization factor used during refinement. You just have to increase the value of `'normal_consistency_factor': 0.1,` at line 160 in `train.py`.
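As a toy illustration of what such a regularization term measures (this is not SuGaR's actual implementation, just a sketch of the idea), a normal-consistency penalty compares the normals of adjacent faces and grows when they disagree, so weighting it more strongly irons out bumps:

```python
# Toy normal-consistency penalty between two unit face normals
# (illustration only; SuGaR's real loss lives in its refinement code).
# The penalty is 0 when adjacent face normals agree and grows as they
# diverge, so a larger weight on this term flattens surface bumps.

def normal_consistency(n1, n2):
    dot = sum(a * b for a, b in zip(n1, n2))
    return 1.0 - dot

flat = normal_consistency((0.0, 0.0, 1.0), (0.0, 0.0, 1.0))   # coplanar faces
bumpy = normal_consistency((0.0, 0.0, 1.0), (1.0, 0.0, 0.0))  # a sharp bump

print(flat, bumpy)  # 0.0 1.0
```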
Finally, you can also try running SuGaR without the postprocessing. Actually, most of the time, postprocessing is not needed, as it is designed to remove some very specific artifacts that appear in SuGaR meshes only when some very fine objects are not covered well enough by your capture. Moreover, the current postprocessing strategy can sometimes remove a small number of triangles that should not be removed, so it can be risky (you may notice some very small holes in the mesh). That's why postprocessing is deactivated by default. So don't worry: you can simply drop this option when running SuGaR.
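Concretely, since the thread states postprocessing is deactivated by default, that just means omitting `--postprocess_mesh True`, e.g. with the paths from the question above:

```shell
# Same refinement command as before, with mesh postprocessing left at
# its default (disabled):
python train.py -s /home/ubuntu/home_office/ \
    -c /home/ubuntu/home_office/gaussian_splatting/home_office/ \
    -r sdf --high_poly True --square_size 5
```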
Thanks @Anttwo!
> Can I ask, what do your capture/training images look like? Did you take a 360° shot around the chair?
I took a video of the scene and extracted frames from it to train the model. I did take a 360° shot around the chair.
I'll try the suggestion you gave!
You're welcome @MagicJoeXZ, I’m always happy to help!
Okay, since you took a 360° shot around the chair, it's expected to get a nice-looking chair but a messy background. If you try to take more images of the background elements in your room, you might get a better background (similar to the playroom scene from the DeepBlending dataset, that SuGaR can reconstruct well).
It depends on what you want to achieve: a 360° capture around a single object is ideal to get a high-quality foreground object, and a comprehensive trajectory that covers the entire scene will give a more balanced geometry for both foreground and background.
Sure, I'm looking forward to your answer!
Woops, sorry, I made a small mistake in my previous message.
To change the mesh regularization factor, you should not change `surface_mesh_laplacian_smoothing_factor` at line 166, as it is not actually used, haha; you should change `'normal_consistency_factor': 0.1,` at line 160 in `train.py`.
Sorry for that.
Hey @Anttwo,
Thanks for all the responses!
Sorry, I didn't express my use case explicitly. Instead of capturing the chair itself, I was trying to capture the entire indoor scene. That's why I pass `-r sdf`.
Besides the smoothing tips, do you have any suggestions on how to capture and reconstruct indoor scenes?
More screenshots
Hey @MagicJoeXZ,
I think I may have what you need, for both capture and smoothing.
I pushed a small change to sugar_extractors/coarse_mesh.py and updated the Tips section of the README.md file.
Please refer to this README.md file for more details. For example, I added some tips to not only remove holes in the mesh, but also reduce these messy ellipsoidal bumps you have on your surface.
Here is the interesting part from the README.md file:
4. I have holes in my mesh, what can I do?
If you have holes in your mesh, this means the cleaning step of the Poisson mesh reconstruction is too aggressive for your scene. You can reduce the threshold `vertices_density_quantile` used for cleaning by modifying line 43 of `sugar_extractors/coarse_mesh.py`. For example, you can change this line from `vertices_density_quantile = 0.1` to `vertices_density_quantile = 0.`
5. I have messy ellipsoidal bumps on the surface of my mesh, what can I do?
Depending on your scene, the default hyperparameters used for Poisson reconstruction may be too fine compared to the size of the Gaussians. The Gaussians can then become visible on the mesh, which results in messy ellipsoidal bumps on its surface. This can happen if the camera trajectory is very close to a simple foreground object, for example.
To fix this, you can reduce the depth of the Poisson reconstruction `poisson_depth` by modifying line 42 of `sugar_extractors/coarse_mesh.py`.
For example, you can change line 42 from `poisson_depth = 10` to `poisson_depth = 7`. You may also try `poisson_depth = 6` or `poisson_depth = 8` if the result is not satisfying.
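For intuition (a back-of-the-envelope sketch with a hypothetical scene size, not SuGaR's code): Poisson reconstruction at depth `d` uses an octree with about `2**d` cells per axis, so reducing the depth enlarges each cell and averages over whole Gaussians instead of tracing their ellipsoidal shapes:

```python
# Rough octree cell size at a given Poisson depth, assuming a
# hypothetical scene extent of 4 units per axis (illustration only).
SCENE_EXTENT = 4.0

def cell_size(depth):
    # An octree of depth d has about 2**d cells along each axis.
    return SCENE_EXTENT / 2 ** depth

for depth in (10, 8, 7, 6):
    print(depth, cell_size(depth))
# Depth 10 gives ~0.004-unit cells, fine enough to expose individual
# Gaussians as bumps; depth 7 gives ~0.03-unit cells, which smooths
# over them.
```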
It could also be useful to change the depth of the Poisson reconstruction depending on whether the foreground or the background is being reconstructed. For example, here, your chair in the foreground seems to have fewer bumps than the background of the scene. I'm going to try to compute the best Poisson depth automatically depending on the scene, and push it to the code.
Thanks @Anttwo for the quick response and also for the update to the README. I'll try your suggestions and compare the output!
Thanks @Anttwo!
- I pulled the latest change
- changed `vertices_density_quantile = 0` and `poisson_depth = 8`
- didn't run postprocessing
- passed high poly as False
The results I have:
- The results are much smoother, which is good
- But they're also more blobby; I guess that's because of the `poisson_depth` change?