gaussian-splatting
How should the background of an unbounded scene be handled?
When I deal with unbounded scenes, the rendering quality of the sky and other distant views can be poor due to the missing point cloud data, or they may appear as floaters near my foreground. How should I handle this?
Such a scene might in fact benefit from modifications to the code! It could be done quite easily if you're handy with Python: detect the min and max point coordinates in 3D, create roughly 100k-1M points with random colors on a uniform sphere around the scene center, with a radius about 10x the scene radius, and add them to the points being loaded from COLMAP.
If this sounds too difficult, maybe I could drop some code that does something like this.
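The steps above can be sketched as follows. This is a minimal NumPy sketch; the function name, argument layout, and the assumption that colors live in [0, 1] are my own, not code from the repository:

```python
import numpy as np

def add_background_sphere(points, colors, n_bg=100_000, radius_factor=10.0, seed=0):
    """Append random-color points on a sphere enclosing the scene.

    `points` is an (N, 3) array of COLMAP point positions and `colors`
    an (N, 3) array of RGB values in [0, 1].
    """
    rng = np.random.default_rng(seed)

    # Scene center and radius from the min/max point coordinates.
    mins, maxs = points.min(axis=0), points.max(axis=0)
    center = (mins + maxs) / 2.0
    scene_radius = np.linalg.norm(maxs - mins) / 2.0

    # Uniform directions on the unit sphere: normalize Gaussian samples.
    dirs = rng.normal(size=(n_bg, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)

    bg_points = center + dirs * scene_radius * radius_factor
    bg_colors = rng.uniform(0.0, 1.0, size=(n_bg, 3))

    return np.vstack([points, bg_points]), np.vstack([colors, bg_colors])
```

The augmented arrays can then be fed into whatever builds the initial point cloud from the COLMAP output.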
Thank you for the suggestion! Could you please provide more details on how to implement this modification? Also, would you mind sharing the code that you mentioned? That would be very helpful.
Hi, I implemented it just now: https://github.com/yzslab/gaussian-splatting-lightning/blob/e258e08dbd058c538ffb751cb296499940456ed2/internal/dataset.py#L232
https://github.com/graphdeco-inria/gaussian-splatting/assets/564361/bf44370f-a2de-40ad-a5bb-a2cea37b132e
Good job! Has this been helpful to you in rendering the background?
I have not run many experiments yet. I got a slight PSNR increase, but the points on the background sphere are prone to being pruned; the final training result preserves only very few background points.
We prune points that are too large based on an arbitrary heuristic. Try increasing the threshold or removing the pruning of big points entirely.
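For illustration, here is a NumPy restatement of the world-space "too big" criterion. The 0.1 factor matches my reading of `GaussianModel.densify_and_prune` in the released code; the function name and array layout are hypothetical:

```python
import numpy as np

def big_point_prune_mask(scalings, scene_extent, ws_threshold=0.1):
    """World-space 'too big' criterion: flag gaussians whose largest
    axis scale exceeds ws_threshold * scene_extent.

    `scalings` is an (N, 3) array of per-gaussian scale values.
    Raising ws_threshold (or skipping this mask entirely) keeps the
    large background-sphere gaussians alive.
    """
    return scalings.max(axis=1) > ws_threshold * scene_extent
```

With the default threshold, a gaussian whose largest scale exceeds 10% of the scene extent is marked for pruning, which is exactly what kills distant background-sphere gaussians that need to be large to cover the sky.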
Thanks for your advice.
It seems to prefer densifying nearby rather than using the background sphere points. Increasing the scene extent can avoid this behavior, but the quality decreased.
We are aware that there are cases and scenes where our hyperparameters and heuristics don't work, but there is not much we can do about that in the context of a GitHub issue.
Comparison videos: with background sphere vs. baseline.
The background sphere can reduce floaters, but some parts of nearby objects may end up represented by the background sphere, especially those with view-dependent effects.
The evaluation metrics: Train_baseline is the experiment without the background sphere.
Hi! Thanks for your implementation! What do you mean by "increase the scene extent" here?
The gaussians are pruned according to the scene extent. You need to prevent the background points from being pruned here: https://github.com/graphdeco-inria/gaussian-splatting/blob/2eee0e26d2d5fd00ec462df47752223952f6bf4e/train.py#L120
Besides, you cannot simply increase the scene extent on its own, because the learning rates are also scaled according to the scene extent:
https://github.com/graphdeco-inria/gaussian-splatting/blob/2eee0e26d2d5fd00ec462df47752223952f6bf4e/scene/gaussian_model.py#L125
https://github.com/graphdeco-inria/gaussian-splatting/blob/2eee0e26d2d5fd00ec462df47752223952f6bf4e/scene/gaussian_model.py#L164-L165
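As a sketch of that coupling: the position learning-rate schedule is an exponential (log-linear) decay whose endpoints are both multiplied by the spatial scale, so doubling the scene extent doubles the position learning rate at every step. The schedule shape follows my reading of `get_expon_lr_func`, and the default values follow my recollection of the released config; treat the exact numbers as assumptions:

```python
import math

def position_lr(step, spatial_lr_scale,
                lr_init=0.00016, lr_final=0.0000016, max_steps=30_000):
    """Log-linear decay from lr_init to lr_final over max_steps, with
    both endpoints multiplied by spatial_lr_scale (the scene extent
    proxy). A larger extent therefore moves gaussians more aggressively,
    which can hurt quality if the extent is inflated artificially.
    """
    t = min(step / max_steps, 1.0)
    lr = math.exp(math.log(lr_init) * (1.0 - t) + math.log(lr_final) * t)
    return lr * spatial_lr_scale
```

This is why inflating the extent just to save the background sphere also changes the optimization dynamics of the foreground.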
Thank you very much for the explanation! I'll try it later!
In addition to this, I also noticed that the far plane of the projection matrix is hardcoded to 100: https://github.com/graphdeco-inria/gaussian-splatting/blob/2eee0e26d2d5fd00ec462df47752223952f6bf4e/scene/cameras.py#L48
Do you think it might be helpful to work with background objects if I increase the z_far?
The `zfar` and `znear` values are ignored in diff-gaussian-rasterization:
https://github.com/graphdeco-inria/diff-gaussian-rasterization/blob/59f5f77e3ddbac3ed9db93ec2cfe99ed6c5d121d/cuda_rasterizer/auxiliary.h#L139-L164
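As a rough Python restatement of what the linked check does: a point is culled only when its camera-space depth falls below a small hardcoded near threshold, and no far-plane comparison against `zfar` is made at all. The 0.2 constant reflects my reading of the linked code, and the function shape is my own illustration:

```python
import numpy as np

def in_frustum(p_world, view_matrix, near=0.2):
    """A point survives culling when its camera-space depth exceeds a
    small near threshold; no far-plane (zfar) comparison is made.
    view_matrix is a 4x4 world-to-camera transform with depth positive
    in front of the camera.
    """
    p_view = view_matrix[:3, :3] @ p_world + view_matrix[:3, 3]
    return p_view[2] > near
```

So increasing `z_far` in the camera setup would not, by itself, make more distant gaussians visible to the rasterizer.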
Ah, I see, thank you very much! The far plane has nothing to do with rasterization; however, it affects how the background objects are "contracted" and how they are projected onto the image plane. Do I understand correctly?
You can see how `zfar` and `znear` work here: https://www.songho.ca/opengl/gl_projectionmatrix.html
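For concreteness, here is the standard OpenGL-style projection from that page. Note that `znear` and `zfar` appear together only in the entries mapping view-space z to clip-space depth, so changing `zfar` does not move a point's x/y position on the image plane. Conventions follow the linked page, not necessarily this repo's `getProjectionMatrix`:

```python
import numpy as np

def perspective(znear, zfar, fov_x, fov_y):
    """OpenGL-style perspective projection matrix. Only entries [2,2]
    and [2,3] (the z-to-depth mapping) depend on zfar; the x/y entries
    reduce to 1/tan(fov/2), with the znear factors canceling.
    """
    t = znear * np.tan(fov_y / 2.0)   # top of near plane
    r = znear * np.tan(fov_x / 2.0)   # right of near plane
    P = np.zeros((4, 4))
    P[0, 0] = znear / r               # = 1 / tan(fov_x / 2): no zfar
    P[1, 1] = znear / t               # = 1 / tan(fov_y / 2): no zfar
    P[2, 2] = -(zfar + znear) / (zfar - znear)
    P[2, 3] = -2.0 * zfar * znear / (zfar - znear)
    P[3, 2] = -1.0
    return P
```

So the far plane only rescales how depth values are packed into clip space; it does not "contract" where objects land on screen.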
Thanks!
@yzslab ~~Hi, it seems that the gaussians are not pruned according to the scene_extent? I can only find related code that clones and splits according to the scene_extent. Did I miss anything? Hoping to get your opinion, thanks!~~ Just found the related code, sorry to bother.
Hello, sorry to bother you. I wonder if you have figured out a proper approach to adding a sky sphere? I encountered a similar situation, where more than a quarter of my images are sky, and (I think) it introduces lots of artifacts.