polestar
> (gaussian_splatting) xxx@amax:/data/xxx/dreamgaussian-main$ python main.py --config configs/image.yaml input=data/zelda_rgba.png save_path=zelda_rgba
> [INFO] load image from data/zelda_rgba.png...
> Number of points at initialisation : 5000
> [INFO] loading zero123...
> Loading pipeline components...: 100%|██████████| 6/6 [00:00
> [@polestarss](https://github.com/polestarss) Our gradient accumulation is implemented using `accelerate`, so there's no need to worry—it works as expected. When we call `accelerator.backward(loss)`, the gradients are temporarily stored in GPU memory...
I ran into the same problem.
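
For context, the gradient-accumulation pattern described in the quoted reply would look roughly like the sketch below. This is not the repo's actual training loop; the model, optimizer, and dummy batches are illustrative placeholders, and only the `Accelerator` / `accumulate` / `accelerator.backward` calls reflect the standard `accelerate` API.

```python
# Minimal sketch of gradient accumulation with Hugging Face `accelerate`.
# Model, optimizer, and data below are placeholders, not from dreamgaussian.
import torch
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=4)

model = torch.nn.Linear(16, 1)  # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
dataloader = [(torch.randn(8, 16), torch.randn(8, 1)) for _ in range(12)]  # dummy batches

model, optimizer = accelerator.prepare(model, optimizer)

for inputs, targets in dataloader:
    # Inside this context, gradients accumulate in GPU memory across batches;
    # the wrapped optimizer only actually steps once every
    # `gradient_accumulation_steps` iterations.
    with accelerator.accumulate(model):
        outputs = model(inputs)
        loss = torch.nn.functional.mse_loss(outputs, targets)
        accelerator.backward(loss)  # grads stay on the GPU between steps
        optimizer.step()
        optimizer.zero_grad()
```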