gaussian-splatting
Large scene, but fewer points generated
Hi,
Thank you for the wonderful work.
I have referred to the previous post #161, but my problem is that the number of generated points is low to begin with, not that it decreases.
I captured 3k images along a street of roughly 200 m. After training, my result is reasonably good, but not detailed (blurry), as in the picture below.
I checked the number of generated points. For the first 7k steps, the point count was around 198K. After 30k steps, it was 281,801+ points, and this number did not change at 60k, 90k, or even 300k steps. I tried the parameter changes suggested there, including the learning rate, scale, and threshold.
However, when I captured 300 images around a single tree on the same street, the number of generated points was 2.7M+, ten times more than for the whole street scene.
Can anybody help me understand why this happens? And how can I improve the result for the street?
Thank you very much.
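For reference, the Gaussian count can be logged during training to see the plateau directly. A minimal sketch against the official train.py loop, where `gaussians` and `iteration` are that script's own variables and the logging interval is arbitrary:

```python
# In the training loop of train.py, e.g. right after the densification step.
# `gaussians` is the GaussianModel instance; get_xyz is its (N, 3) positions tensor.
if iteration % 1000 == 0:
    print(f"iter {iteration}: {gaussians.get_xyz.shape[0]} Gaussians")
```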
I also have a similar problem. One explanation, I think, could be that densification is based on the averaged positional gradient: if you have images that are far from a given point, the gradient from those views will be lower, pushing the average below the threshold.
(Changing this threshold might help a bit, but I have not had much success with it so far...)
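For context, this is roughly what the official gaussian_model.py does (a paraphrased sketch, not a verbatim copy): the screen-space gradient is accumulated per point each time the point is visible, and at densification time that sum is divided by the visibility count, so the threshold is applied to an average. A point seen mostly from far away collects small per-view gradients, so its average tends to stay under --densify_grad_threshold.

```python
import torch

# Paraphrased from GaussianModel (gaussian_model.py) in the official repo.
def add_densification_stats(self, viewspace_point_tensor, update_filter):
    # Accumulate the screen-space (x, y) gradient norm for visible points.
    self.xyz_gradient_accum[update_filter] += torch.norm(
        viewspace_point_tensor.grad[update_filter, :2], dim=-1, keepdim=True)
    self.denom[update_filter] += 1  # how many times each point was visible

def densify_and_prune(self, max_grad, min_opacity, extent, max_screen_size):
    grads = self.xyz_gradient_accum / self.denom  # average over visible views
    grads[grads.isnan()] = 0.0
    # Only points whose average exceeds max_grad (--densify_grad_threshold)
    # are cloned (small points) or split (large points).
    self.densify_and_clone(grads, max_grad, extent)
    self.densify_and_split(grads, max_grad, extent)
```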
Another question is camera calibration: are you doing it with COLMAP, or do you have poses already? Using exhaustive matching helped a bit for me, but that is also quite expensive to do...
How did you modify the opacity reset interval and densification interval for your use case?
Ps.: Scaffold-GS promises to improve on this problem a bit :)
I also have this problem and I'm interested in knowing more. Is there an optimal metric size for the scene?
Hi @Ph03n1xdust,
- "Another question is the camera calibration, are you doing it with colmap or do you have poses for it?": I did it with COLMAP, exhaustive matching. I checked in the COLMAP GUI and the camera poses are fine, and even the rendering in the interactive GS viewer looks fine. So I think the COLMAP result should be good.
- "How did you modify the opacity reset intervals and densification interval for your usecase?": I used the default value for the opacity reset interval, but a value 3 times smaller for the densification interval.
- "One explanation I think could be that the densification happens based on averaged positional gradient": Maybe reasonable. Let me check with a very small value and report back.
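For anyone tuning the same knobs: they are defined in OptimizationParams (arguments/__init__.py) and can be overridden on the train.py command line. The defaults below are from the original release; double-check them against your checkout. A "3 times smaller densification interval" would correspond to roughly densification_interval = 33.

```python
# Densification-related defaults from OptimizationParams (original release;
# verify against your checkout before relying on them).
densification_interval = 100      # densify every N iterations
opacity_reset_interval = 3000     # reset opacities every N iterations
densify_from_iter      = 500      # first iteration with densification
densify_until_iter     = 15_000   # last iteration with densification
densify_grad_threshold = 0.0002   # avg. view-space gradient to clone/split
percent_dense          = 0.01     # scale cutoff between cloning and splitting
```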
Hi @Ph03n1xdust,
"One explanation I think could be that the densification happens based on averaged positional gradient and if you have images, which are far from the given point, then this gradient will be lower thus pushing it below the threshold." -> I divided large scene into small ones using the same images, the number of points increased. For example, I divided the road in to segment A, B, C. Number of points in each segment A, B, C increase although each segment contains far images also. I decrease threshold, it got more floaters and rendering was also not good. I think there are some other parameters somewhere in the code which I don't understand.
Hope someone can help.
This issue might be related to what the author mentioned in the FAQ: "the approach can struggle in multi-scale detail scenes (extreme close-ups, mixed with far-away shots)." However, reducing the values of --position_lr_init, --position_lr_final, and --scaling_lr has limited effect. I tried to minimize the interference between different scales by reducing the gradient of distant pixels based on depth, but it was ineffective (maybe my implementation is incorrect). Additionally, the accuracy of the COLMAP poses might also be a contributing factor, as even minor errors can lead to significant misalignment of distant pixels.
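For reference, one way that idea can be written (a hypothetical sketch, not the commenter's actual code; it also assumes a rasterizer variant that returns a per-pixel depth map, which the stock renderer does not expose): weight the per-pixel loss by a factor that decays with depth, so distant pixels contribute less positional gradient.

```python
import torch

def depth_weighted_l1(rendered, gt, depth, near=1.0):
    """Hypothetical L1 loss down-weighted by rendered depth.

    rendered, gt: (3, H, W) images; depth: (H, W) per-pixel depth map.
    Pixels at depth <= near keep full weight; beyond that the weight
    falls off as near / depth, shrinking their gradient contribution.
    """
    weight = (near / depth.clamp(min=near)).unsqueeze(0)  # (1, H, W), in (0, 1]
    return (weight * (rendered - gt).abs()).mean()
```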
Do you have any new developments? Thank you.
Set densify_until_iter to 0 to disable densification and keep the original point cloud. If the number of points in the point cloud is insufficient, you can increase the number of points during photo alignment, use a dense point cloud, or edit the sparse point cloud in Blender, adding points where coverage is thin, to improve the outcome.
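If editing in Blender is inconvenient, the same idea can be scripted: duplicate each sparse point with a small jitter before training. A sketch using the plyfile package; the paths, copy count, and jitter scale are placeholders to adapt:

```python
import numpy as np
from plyfile import PlyData, PlyElement

def upsample_ply(src, dst, copies=4, sigma=0.01):
    """Duplicate each sparse point `copies` times with Gaussian jitter.

    src/dst: paths to the scene's points3D .ply (placeholders);
    sigma: jitter std-dev in scene units -- tune to your scene scale.
    """
    ply = PlyData.read(src)
    v = ply["vertex"].data
    out = [v]
    for _ in range(copies):
        dup = v.copy()  # keeps colors/normals of the original point
        for axis in ("x", "y", "z"):
            dup[axis] += np.random.normal(0.0, sigma, size=len(dup))
        out.append(dup)
    merged = np.concatenate(out)
    PlyData([PlyElement.describe(merged, "vertex")]).write(dst)
```

Note that sigma must be chosen relative to the scene scale (COLMAP units); too small and the copies collapse onto the originals, too large and they scatter as floaters.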
One more question: my generated point clouds are too sparse in comparison, which may be because points are wasted on background clutter. Is there any way to increase the density of the generated point cloud?