2d-gaussian-splatting
reduce VRAM requirement
Hi,
I use a system with a single graphics card (16 GB of VRAM) and a large amount of RAM instead, since expanding RAM is always cheaper than expanding VRAM. I therefore had to reduce VRAM usage and load the images into system memory.
By simply not loading original_image onto CUDA in the Camera class, I got what I expected. This causes no other problems, since in the training part the loss function already moves the image to the GPU with the following:
gt_image = torch.clamp(viewpoint.original_image.to("cuda"), 0.0, 1.0)
l1_test += l1_loss(image, gt_image).mean().double()
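The idea above can be sketched as follows. Note this is a simplified stand-in, not the repository's actual code: Camera and l1_loss here are minimal hypothetical versions, and the transfer falls back to CPU when no GPU is present.

```python
import torch

class Camera:
    """Minimal sketch: keep the ground-truth image in host RAM."""
    def __init__(self, image, data_device="cpu"):
        # Store the full-resolution image on data_device (CPU here),
        # so VRAM stays low even with thousands of training images.
        self.original_image = image.clamp(0.0, 1.0).to(data_device)

def l1_loss(pred, gt):
    # Elementwise L1 difference, as in the loss line above.
    return torch.abs(pred - gt)

device = "cuda" if torch.cuda.is_available() else "cpu"
cam = Camera(torch.rand(3, 4, 4))

# Per iteration: transfer only the current view's image to the GPU.
gt_image = torch.clamp(cam.original_image.to(device), 0.0, 1.0)
rendered = torch.rand(3, 4, 4, device=device)
loss = l1_loss(rendered, gt_image).mean().double()
```

The one-image-at-a-time transfer is what keeps peak VRAM bounded regardless of dataset size, at the cost of a host-to-device copy each iteration.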
Please merge if you find this useful. Thanks for your great work!
Thank you for your PR. Indeed, this is important. However, I think it will increase training time, because we need to load the image onto the GPU every iteration. Perhaps, for large scenes with thousands of images, a smarter data loader should be implemented. I think it would be great to leave that for future development.
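One possible shape for such a smarter loader is a standard PyTorch DataLoader with pinned host memory and non-blocking copies, which lets the host-to-device transfer overlap with compute. This is only a sketch under assumed names; it is not part of the repository:

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ImageDataset(Dataset):
    """Hypothetical dataset: images stay in host RAM until requested."""
    def __init__(self, images):
        self.images = images

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        return self.images[idx]

images = [torch.rand(3, 8, 8) for _ in range(4)]
# pin_memory places batches in page-locked RAM, which enables
# asynchronous (non_blocking) copies to the GPU.
loader = DataLoader(ImageDataset(images), batch_size=1,
                    pin_memory=torch.cuda.is_available())

device = "cuda" if torch.cuda.is_available() else "cpu"
seen = 0
for batch in loader:
    # non_blocking=True overlaps the copy with GPU compute
    # when the source tensor is pinned; it is a no-op on CPU.
    gt = batch.to(device, non_blocking=True)
    seen += 1
```

Adding worker processes (num_workers) would additionally hide disk and decode latency for datasets too large to hold in RAM.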
Hi, can you add an argument like data_device so that we can control which device the data is placed on?
I think this may be useful when processing a large number of images, or with --resolution 1.
Update: the --data_device parameter already serves exactly this purpose. No code change is needed.
Thanks for the nice code, it helps a lot! :)
Ohhh, I see the data_device argument. Ready to merge!