
Difference between "padded_grad" and "torch.norm(grads, dim=-1)" when performing densification

NeutrinoLiu opened this issue 1 year ago • 2 comments

Hi, when I compare the condition for split densification with the one for clone densification, I found that the definition of "too large gradient" is slightly different between the two.

For split, the mask is generated by https://github.com/graphdeco-inria/gaussian-splatting/blob/472689c0dc70417448fb451bf529ae532d32c095/scene/gaussian_model.py#L354, while for clone, the mask is generated by https://github.com/graphdeco-inria/gaussian-splatting/blob/472689c0dc70417448fb451bf529ae532d32c095/scene/gaussian_model.py#L376. I am not quite sure about the functionality of "padded_grad" here. Considering that the arguments passed to these two functions are identical, is there any difference between these two methods of filtering out large-gradient Gaussians? Thx
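For context, here is a rough, self-contained paraphrase of what those two lines do (the shapes and threshold are made up for illustration; see the links above for the actual code):

```python
import torch

# Hypothetical setup, for illustration only (not from the repo):
n_init_points = 8            # number of Gaussians at split time
grads = torch.rand(6, 1)     # accumulated gradient values, one per pre-existing point
grad_threshold = 0.0002

# Clone-style selection: threshold the per-point gradient norms directly.
clone_mask = torch.where(torch.norm(grads, dim=-1) >= grad_threshold, True, False)

# Split-style selection: zero-pad the gradients up to the current point
# count, then apply the same threshold.
padded_grad = torch.zeros(n_init_points)
padded_grad[:grads.shape[0]] = grads.squeeze()
split_mask = torch.where(padded_grad >= grad_threshold, True, False)
```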

NeutrinoLiu avatar Apr 20 '24 05:04 NeutrinoLiu

Same question => Solved

The shape of the tensor grads is (N, 1), where N is the total number of points, so torch.norm(grads, dim=-1) doesn't change the gradient values: over a size-1 dimension, the norm is just the absolute value of each (non-negative) entry.
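A quick self-contained check of this (shapes and threshold are made up for illustration):

```python
import torch

grads = torch.rand(5, 1)  # shape (N, 1): one accumulated gradient value per point

# Over a size-1 dimension, the norm reduces to the absolute value, and the
# accumulated values are non-negative, so this is (numerically) the data itself:
assert torch.allclose(torch.norm(grads, dim=-1), grads.squeeze(-1))

# Consequently, the two thresholding styles select exactly the same points
# whenever grads already covers every current point (i.e., no padding needed):
grad_threshold = 0.5
clone_mask = torch.norm(grads, dim=-1) >= grad_threshold
padded_grad = torch.zeros(grads.shape[0])
padded_grad[:grads.shape[0]] = grads.squeeze()
split_mask = padded_grad >= grad_threshold
assert torch.equal(clone_mask, split_mask)
```

As far as I can tell from the code, the zero-padding in the split path exists because densify_and_split runs after densify_and_clone has already appended new points, so grads can be shorter than the current point count; the freshly cloned points get a padded gradient of 0 and are never selected for splitting in the same round.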

KEVIN09876 avatar Apr 23 '24 06:04 KEVIN09876


It's confusing that they use two different representations for the same functionality, but anyway, thanks.

NeutrinoLiu avatar Apr 23 '24 14:04 NeutrinoLiu