gaussian-splatting
Training Time Breakdown
In the paper, the authors state that "The majority (∼80%) of our training time is spent in Python code, since...". However, when I did a runtime breakdown of 3DGS training using torch.cuda.Event(), it turned out that most of the time is spent in the backward pass (which is implemented in CUDA). Can anyone explain whether I am misunderstanding something?
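
For context, here is a minimal sketch of the kind of per-phase timing pattern I used. The model, optimizer, and loop below are a simplified stand-in, not the actual 3DGS training loop from train.py; only the torch.cuda.Event bracketing is the point. Since event timings are asynchronous, elapsed_time() should only be read after a synchronize:

```python
import torch

# Toy model standing in for the 3DGS render + loss computation; the timing
# pattern (event pairs around forward and backward) is what matters here.
device = "cuda"
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024), torch.nn.ReLU(), torch.nn.Linear(1024, 1)
).to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(4096, 1024, device=device)

fwd_start = torch.cuda.Event(enable_timing=True)
fwd_end = torch.cuda.Event(enable_timing=True)
bwd_start = torch.cuda.Event(enable_timing=True)
bwd_end = torch.cuda.Event(enable_timing=True)

fwd_ms, bwd_ms = 0.0, 0.0
for _ in range(100):
    opt.zero_grad(set_to_none=True)

    fwd_start.record()
    loss = model(x).square().mean()   # stand-in for render + loss
    fwd_end.record()

    bwd_start.record()
    loss.backward()                   # backward pass (CUDA kernels)
    bwd_end.record()

    opt.step()

    # Events are recorded asynchronously on the CUDA stream, so wait for
    # completion before reading the elapsed times (in milliseconds).
    torch.cuda.synchronize()
    fwd_ms += fwd_start.elapsed_time(fwd_end)
    bwd_ms += bwd_start.elapsed_time(bwd_end)

total = fwd_ms + bwd_ms
print(f"forward: {fwd_ms:.1f} ms ({100 * fwd_ms / total:.1f}%), "
      f"backward: {bwd_ms:.1f} ms ({100 * bwd_ms / total:.1f}%)")
```

Note that the per-iteration synchronize adds a small overhead, but it keeps the forward/backward attribution unambiguous; the backward share reported by this pattern is what looked dominant in my breakdown.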