4DGaussians
Loss becomes NaN after some time of training
And the rendered image has a white background.
Wow... I also found the same problem during optimization. Initially I thought it was an error on my training machine. Most cases happen on scenes with more background points, such as flame_salmon_1 and coffee_martini of the Neu3D dataset. I think it may be numerical overflow during training. Do you have any ideas? I hope we can solve it together if you have time :)
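Not a fix for the root cause, but here is a minimal sketch of a guard one could drop into a standard PyTorch training loop to test the overflow theory. The function name `guarded_step` and the way parameters are collected from the optimizer are my own placeholders, not this repo's API: the idea is just to skip the update when the loss is non-finite and clip gradients so a single bad batch cannot blow up the deformation network.

```python
import torch


def guarded_step(loss: torch.Tensor, optimizer: torch.optim.Optimizer,
                 iteration: int, max_norm: float = 1.0) -> bool:
    """Backprop and step only when the loss is finite; clip gradients to
    limit overflow. Returns True if an optimizer step was actually taken."""
    if not torch.isfinite(loss):
        # Log the first iteration that goes non-finite and skip the update.
        print(f"[iter {iteration}] non-finite loss ({loss.item()}), skipping update")
        optimizer.zero_grad(set_to_none=True)
        return False
    loss.backward()
    params = [p for group in optimizer.param_groups
              for p in group["params"] if p.grad is not None]
    torch.nn.utils.clip_grad_norm_(params, max_norm=max_norm)
    optimizer.step()
    optimizer.zero_grad(set_to_none=True)
    return True
```

Calling this in place of the usual `loss.backward(); optimizer.step(); optimizer.zero_grad()` at least reports which iteration first produces a non-finite loss, which helps confirm or rule out the overflow hypothesis.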
I also encountered this problem when training on my own scene; the loss may become NaN after several iterations in the fine stage. Besides, there are also cases where "RuntimeError: numel: integer multiplication overflow" is raised during fine-stage training. I am not sure whether it is caused by a similar reason.
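The "numel: integer multiplication overflow" error can appear when some tensor shape has grown far beyond what was expected, for example if densification keeps adding points without bound. A small diagnostic sketch (assuming the Gaussian model exposes its positions as an (N, 3) tensor named `get_xyz`, as in the original 3D Gaussian Splatting codebase this project builds on; the threshold is arbitrary) to log the point count around the densification steps:

```python
import torch


def log_gaussian_count(gaussians, iteration: int, max_points: int = 3_000_000) -> int:
    """Print the current number of Gaussians and warn about non-finite
    positions or runaway growth. `gaussians.get_xyz` is assumed to be an
    (N, 3) tensor, following the 3DGS convention."""
    xyz = gaussians.get_xyz
    n = xyz.shape[0]
    if not torch.isfinite(xyz).all():
        print(f"[iter {iteration}] non-finite Gaussian positions detected")
    if n > max_points:
        print(f"[iter {iteration}] point count {n} exceeds {max_points}; "
              f"densification may be running away")
    return n
```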
I met the same problem on a COLMAP-format dataset.
The PSNR suddenly drops to an unexpected value (4.28), while the number of points in the point cloud also decreases.
My guess is that the scene's bounding box is too large, which causes the error during backpropagation through the Gaussian deformation field network.
Is there any solution to this problem?
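If a too-large bounding box really is pushing the deformation network into unstable numerical ranges, one experiment is to normalize the initial point cloud before training. A minimal sketch under that assumption (the function name is mine, and the camera poses would need the same center/scale transform, which is not shown):

```python
import numpy as np


def normalize_point_cloud(xyz: np.ndarray, target_radius: float = 1.0):
    """Center the initial COLMAP points and rescale them so that ~99% lie
    within `target_radius` of the origin. Returns the transformed points
    plus (center, scale) so the cameras can be transformed consistently."""
    center = xyz.mean(axis=0)
    centered = xyz - center
    radius = np.percentile(np.linalg.norm(centered, axis=1), 99.0)
    scale = target_radius / max(float(radius), 1e-8)
    return centered * scale, center, scale
```

Camera translations have to be shifted by the same center and multiplied by the same scale, otherwise the renders will no longer match; this only serves to test whether the bounding-box theory explains the NaNs.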
In my test, setting no_dr=True and no_ds=True (disabling the deformation of rotation and scaling) reduces how often the problem happens.
However, it seems that performance might be significantly affected by this approach. Are there any other solutions?
Why do I always have to restart training because the loss becomes NaN? I can't even finish a single run.