
MemoryError

Open ideszscu opened this issue 10 months ago • 5 comments

This is what I get when I try to optimize 308 pictures. I don't know if that is too many, whether I should scale them down first, or whether I just don't have enough disk space... any idea? Thanks

Optimizing
Output folder: ./output/cec3bb9c-6 [24/10 09:40:09]
Tensorboard not available: not logging progress [24/10 09:40:09]
Reading camera 308/308 [24/10 09:40:11]
Loading Training Cameras [24/10 09:40:11]
[ INFO ] Encountered quite large input images (>1.6K pixels width), rescaling to 1.6K.
 If this is not desired, please explicitly specify '--resolution/-r' as 1 [24/10 09:40:11]
Traceback (most recent call last):
  File "train.py", line 216, in <module>
    training(lp.extract(args), op.extract(args), pp.extract(args), args.test_iterations, args.save_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from)
  File "train.py", line 35, in training
    scene = Scene(dataset, gaussians)
  File "C:\Users\Usuario\pinokio\api\gaussian-splatting-Windows.git\scene\__init__.py", line 73, in __init__
    self.train_cameras[resolution_scale] = cameraList_from_camInfos(scene_info.train_cameras, resolution_scale, args)
  File "C:\Users\Usuario\pinokio\api\gaussian-splatting-Windows.git\utils\camera_utils.py", line 58, in cameraList_from_camInfos
    camera_list.append(loadCam(args, id, c, resolution_scale))
  File "C:\Users\Usuario\pinokio\api\gaussian-splatting-Windows.git\utils\camera_utils.py", line 41, in loadCam
    resized_image_rgb = PILtoTorch(cam_info.image, resolution)
  File "C:\Users\Usuario\pinokio\api\gaussian-splatting-Windows.git\utils\general_utils.py", line 22, in PILtoTorch
    resized_image_PIL = pil_image.resize(resolution)
  File "C:\Users\Usuario\pinokio\bin\miniconda\envs\gaussian-splatting\lib\site-packages\PIL\Image.py", line 1943, in resize
    return self._new(self.im.resize(size, resample, box))
MemoryError
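
For a sense of scale, here is a rough back-of-envelope for this loading step, not a measurement of this repo: the 4000x3000 source resolution is an assumed example (the post does not state one), and the premise that every camera's resized image stays resident as a float32 tensor is inferred from the cameraList_from_camInfos loop in the traceback above.

    # Hypothetical back-of-envelope: RAM held by the resized training images.
    n_images = 308                        # from the run above
    src_w, src_h = 4000, 3000             # assumed source resolution
    dst_w = 1600                          # the repo's 1.6K rescale target
    dst_h = round(src_h * dst_w / src_w)  # 1200 px, aspect ratio preserved

    per_image = dst_w * dst_h * 3 * 4     # float32 RGB: ~23 MB per image
    total = n_images * per_image          # ~7.1 GB for the whole camera list
    print(f"resized set: {total / 1e9:.1f} GB")

    # Each individual resize also briefly needs the fully decoded source
    # (4000*3000*3 bytes, ~36 MB) plus the destination buffer, so 16 GB of
    # RAM with a small Windows pagefile can still end in MemoryError.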

ideszscu avatar Oct 24 '23 07:10 ideszscu

Same problem.

SegaMega99 avatar Nov 17 '23 15:11 SegaMega99

Same problem here. Anything?

ltm7725 avatar Nov 30 '23 23:11 ltm7725

I am also getting the same error on a high-spec cloud computer, yet I can generate a successful splat on my lower-spec laptop. Any help appreciated.

File "C:\ProgramData\Anaconda3\envs\gaussian_splatting\lib\site-packages\PIL\ImageFile.py", line 283, in load_prepare
    self.im = Image.core.new(self.mode, self.size)
MemoryError

Error is happening on a vagon cloud computer with the following specs:

  • Windows Server 2022
  • 4 cores
  • 16GB RAM
  • NVIDIA A10G Tensor Core GPU (24GB)

But I can successfully run python train.py -s data\my_project -r 8 on my laptop with the following specs:

  • Windows 11 Home
  • Intel Core i7
  • 16GB RAM
  • NVIDIA GeForce RTX 3070 Laptop GPU (8GB)

The vagon cloud computer should have plenty of resources. Any idea why I would be getting a MemoryError?

Thanks, Barry

Update: I suspect this is due to a lack of virtual memory on the cloud computer. When train.py resizes the source images, it causes a huge spike in system RAM. Could the script be modified to first resize the images in batches and write them to disk, before starting training?
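
In the meantime, that pre-processing can be done outside train.py. Here is a minimal sketch of the idea, assuming Pillow is installed; the folder names and the 1600-pixel target (chosen to match the 1.6K warning above) are placeholders, not part of the repo. Only one full-resolution image is decoded at a time, so peak RAM stays near the size of a single image.

    from pathlib import Path
    from PIL import Image

    SRC = Path(r"data\my_project\input")         # hypothetical source folder
    DST = Path(r"data\my_project\input_1600")    # hypothetical output folder
    MAX_WIDTH = 1600                             # matches the 1.6K rescale target

    DST.mkdir(parents=True, exist_ok=True)
    for img_path in sorted(SRC.iterdir()):
        if img_path.suffix.lower() not in (".jpg", ".jpeg", ".png"):
            continue
        with Image.open(img_path) as im:
            if im.width > MAX_WIDTH:
                new_size = (MAX_WIDTH, round(im.height * MAX_WIDTH / im.width))
                im = im.resize(new_size, Image.LANCZOS)
            im.save(DST / img_path.name)         # one image in memory at a time

Pointing train.py at the downscaled copy with -r 1 should then skip the in-memory rescale entirely.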

barrykeenan avatar Dec 10 '23 23:12 barrykeenan

Related question about quality - I have 483 input images of resolution 5332 x 3522, so when running train.py I get the warning:

[ INFO ] Encountered quite large input images (>1.6K pixels width), rescaling to 1.6K.

If I rescale the input images to 1600 pixels wide myself, rather than relying on the train.py -r parameter, will there be any loss of quality in the resulting splat? I.e., does train.py use any extra information from the original-size images?

barrykeenanresn avatar Dec 11 '23 04:12 barrykeenanresn

Same problem here.

WeihongPan avatar Apr 26 '24 02:04 WeihongPan