SC-GS
RuntimeError at iteration 7500.
Traceback (most recent call last):
File "/home/cc/3dgs/SC-GS/train_gui.py", line 1886, in
Hi, did you change any code? It seems that the initialization of the Gaussians is incorrect, because features_dc and features_rest do not hold the same number of points. Could you please check the initialization of both? I guess one of them is an empty 0-size tensor.
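For reference, a quick sanity check right after the (node) Gaussians are initialized could look like this (a sketch; check_sh_features is a hypothetical helper, and the usual 3DGS shape convention is assumed):

```python
# Hypothetical debug helper: call it right after the node Gaussians are
# initialized. It assumes the usual 3DGS convention that features_dc is
# [N, 1, 3] and features_rest is [N, (sh_degree + 1) ** 2 - 1, 3].
def check_sh_features(features_dc, features_rest):
    print("features_dc:", tuple(features_dc.shape),
          "features_rest:", tuple(features_rest.shape))
    assert features_dc.shape[0] == features_rest.shape[0], \
        "features_dc and features_rest cover a different number of points"
    assert features_rest.numel() > 0, \
        "features_rest is an empty 0-size tensor"
```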
Thank you for your quick response! I did not change any code; I just applied it to my own dataset. When I debugged this error, I found that the shapes of features_dc and features_rest change from [x, 1, 3] / [x, 15, 3] to [1, x, 1, 3] / [1, x, 0, 3], where x denotes the number of points or nodes. Therefore the "Sizes of tensors" error is raised.
In the initial phase, torch.cat((features_dc, features_rest), dim=1) concatenates a [node, 1, 3] tensor with a [node, 15, 3] tensor along dim=1, which works. However, after self.iterations_node_sampling = 7500, it becomes a concatenation of [1, node, 1, 3] with [1, node, 0, 3] along dim=1, where the non-concatenated dimensions no longer match.
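The mismatch can be reproduced in isolation with the shapes from the log below (a minimal sketch, independent of the SC-GS code):

```python
import torch

# Shapes observed at iteration 7500: after node sampling, features_rest ends up
# with a zero-sized SH dimension and both tensors gain a leading batch dim.
features_dc = torch.zeros(1, 16, 1, 3)
features_rest = torch.zeros(1, 16, 0, 3)

# torch.cat along dim=1 requires all other dimensions to match; here dim 2 is
# 1 vs 0, so this raises "RuntimeError: Sizes of tensors must match except in
# dimension 1".
torch.cat((features_dc, features_rest), dim=1)
```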
I reproduced this error and printed features_dc and features_rest, as shown below.
"Initialization with all pcl. Need to reset the optimizer. [01/04 21:58:38] Initialize Learnable Gaussians for Nodes with Point Clouds! [01/04 21:58:38] Control node initialized with 16 from 16 points. [01/04 21:58:38] torch.Size([204, 1, 3]) ------------ torch.Size([204, 15, 3]) [01/04 21:58:38] (7499 iter) ================================ [01/04 21:58:38] torch.Size([1, 16, 1, 3]) ------------ torch.Size([1, 16, 0, 3]) [01/04 21:58:38] (7500 iter) "
I suspect the problem is that the initialized Gaussians are not aligned with the true scene content. That is why features_rest collapses to a zero-sized shape. On D-NeRF datasets, or on any self-captured dataset whose COLMAP point cloud is correct, this error is not raised.
I suggest you try the solution here: https://github.com/yihua7/SC-GS/issues/12#issuecomment-1980336869. By keeping all points and converting them into Gaussians at the initialization step, the extinction of Gaussians during the first stage may be avoided.
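A rough sketch of that idea (hypothetical function and variable names; the actual change belongs in the point-cloud initialization path of train_gui.py):

```python
import torch

# Hypothetical sketch of the workaround: turn every COLMAP point into a
# Gaussian instead of downsampling, so the first stage cannot prune the set
# down to an empty tensor. (The real code also converts RGB to SH space.)
def init_gaussians_from_all_points(points, colors, sh_degree=3):
    # points: [N, 3] COLMAP positions; colors: [N, 3] RGB in [0, 1]
    xyz = points.clone()                 # keep all N points, no sampling
    features_dc = colors.unsqueeze(1)    # [N, 1, 3] DC coefficients
    num_rest = (sh_degree + 1) ** 2 - 1  # 15 for sh_degree = 3
    features_rest = torch.zeros(points.shape[0], num_rest, 3)
    return xyz, features_dc, features_rest
```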
If the above method does not solve your problem, you can try --random_init_deform_gs to initialize the Gaussians instead of using the COLMAP point cloud. In this way, the initial Gaussians are uniformly sampled from the cube spanning -1 to 1. You can change the code here: https://github.com/yihua7/SC-GS/blob/26cd57d09598b2f5d951029808a5ac9f0ff4f626/train_gui.py#L160 to enlarge or shrink the initial cube.
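For reference, that uniform cube initialization amounts to something like the following (a sketch; half_size is a hypothetical knob corresponding to the hard-coded extent at the linked line):

```python
import torch

# Sketch of the uniform cube initialization behind --random_init_deform_gs:
# sample Gaussian centers in [-half_size, half_size]^3. Increase half_size to
# broaden the initial cube, decrease it to shrink it.
def random_cube_init(num_points, half_size=1.0):
    return (torch.rand(num_points, 3) * 2.0 - 1.0) * half_size
```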
However, I strongly doubt that dynamic Gaussians can be trained on your data, since an inaccurate COLMAP point cloud implies inaccurate camera poses. Anyway, you can try the above solutions, and I hope this information helps! :)
Thanks for your reply! I will try it again and report back on this problem later.
Hi, I have the same issue. If you've found a solution, could you share some suggestions? (I am using the NeRF-DS dataset.)