
How to reduce "batch size" to save memory

Open HungNgoCT opened this issue 2 years ago • 2 comments

Hi there.

Thanks for this interesting work.

I am using an RTX 2070 Super GPU with 8 GB of VRAM. It can train on a small number of images (roughly fewer than 80), but it runs out of GPU memory on larger sets, such as 100 images.

I also cannot increase the resolution (e.g. 1080x1920) during training for observation, again because of running out of GPU memory.

I understand that reducing the "batch size" (or chunk size) can save GPU memory. Can anyone point me to the place in the code where I can reduce the batch size? Also, is there a way to save the network, then load it and render the output later to save memory?

Any help is highly appreciated.

HungNgoCT avatar Sep 05 '22 02:09 HungNgoCT

I am facing the same issue. Anyone have an idea?

vermouth599 avatar Sep 16 '22 07:09 vermouth599

Any update on this issue?

Frydesk avatar Nov 05 '22 18:11 Frydesk

Refer to src/python_api.cu. If you use the Python scripts, you can set testbed.training_batch_size = 1<<16 somewhere in scripts/run.py before the training section to reduce GPU memory usage. The default is 262144 (2^18). In some of my cases, reducing the batch size even made training faster.
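A minimal sketch of what this could look like, assuming the `pyngp` bindings built from instant-ngp (the `Testbed`, `load_training_data`, and `frame` calls follow the pattern in scripts/run.py; the constructor signature may differ between instant-ngp versions, and the scene path below is hypothetical):

```python
# Batch sizes are powers of two; instant-ngp's default is 2^18.
DEFAULT_BATCH_SIZE = 1 << 18   # 262144 samples per training step
REDUCED_BATCH_SIZE = 1 << 16   # 65536 -- roughly a quarter of the memory

def configure_batch_size(testbed, batch_size=REDUCED_BATCH_SIZE):
    """Set the training batch size before the training loop starts.

    `training_batch_size` is the attribute exposed in src/python_api.cu.
    """
    testbed.training_batch_size = batch_size
    return testbed

def train_with_reduced_batch(scene_path, batch_size=REDUCED_BATCH_SIZE):
    """Sketch of a training loop; requires the pyngp bindings to be built."""
    import pyngp as ngp  # only available after building instant-ngp with Python support
    testbed = ngp.Testbed(ngp.TestbedMode.Nerf)  # constructor may vary by version
    testbed.load_training_data(scene_path)       # e.g. "data/nerf/fox" (hypothetical path)
    configure_batch_size(testbed, batch_size)
    while testbed.frame():  # one training step per call, as in scripts/run.py
        pass
```

Regarding saving and re-loading the trained network: scripts/run.py exposes `--save_snapshot` and `--load_snapshot` flags (check your version of the script), which let you train once, save the model, and render later in a separate run.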

carlzhang94 avatar Aug 15 '23 08:08 carlzhang94