
Support automatic mixed precision (AMP) training?

Open iYuqinL opened this issue 1 year ago • 3 comments

I tried to apply AMP training with nerfacc, but I found that it produces much worse rendering results: compared to float32 training, there is a 3-point drop in PSNR.

Of course, I fixed a data type issue when training with AMP.

[screenshot: Selection_394]

I wonder if you have plans to support AMP training.
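
To be concrete, this is the kind of setup I tried (a minimal sketch of a standard torch.cuda.amp loop, not nerfacc's API; the tiny `radiance_field` and the fake ray/pixel batches are hypothetical stand-ins for the real model and the nerfacc renderer):

```python
import torch
import torch.nn.functional as F

# hypothetical stand-in for the real radiance field; in practice this would
# be the NGP-style network whose outputs are fed to the nerfacc renderer
radiance_field = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 3)
).cuda()
optimizer = torch.optim.Adam(radiance_field.parameters(), lr=1e-2)
scaler = torch.cuda.amp.GradScaler()

for step in range(1000):
    rays = torch.rand(4096, 3, device="cuda")     # fake ray batch
    pixels = torch.rand(4096, 3, device="cuda")   # fake target colors
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():               # forward pass in mixed precision
        rgb = radiance_field(rays)                # real code would render via nerfacc here
        loss = F.mse_loss(rgb, pixels)
    scaler.scale(loss).backward()                 # scale loss to avoid fp16 gradient underflow
    scaler.step(optimizer)                        # unscales grads, skips step on inf/nan
    scaler.update()
```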

iYuqinL avatar Nov 11 '22 04:11 iYuqinL

Hi, I'm not sure how much benefit I can get from AMP training, so currently I'm not very motivated to support it.

Also, one big reason is that I'm not quite familiar with how torch amp works under the hood, so I'm not sure which parts could cause issues for AMP.

But happy to discuss!

liruilong940607 avatar Nov 11 '22 14:11 liruilong940607

The biggest benefit of AMP training is faster training (a speedup of about 1x, i.e., roughly twice as fast).

I am quite new to PyTorch CUDA extensions, and I don't know whether the problem is that autocast / gradient scaling does not work for the CUDA implementation. I will take some time to learn about PyTorch extensions and AMP.
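
For example (just a sketch of what I suspect from the PyTorch docs, not nerfacc's actual code; `WeightedAccumulate` is a made-up stand-in for an op backed by a CUDA kernel), it seems autocast cannot see inside custom autograd Functions, so under AMP they can receive float16 tensors unless the inputs are cast back, e.g. with `torch.cuda.amp.custom_fwd`:

```python
import torch
from torch.cuda.amp import custom_fwd, custom_bwd

class WeightedAccumulate(torch.autograd.Function):
    """Hypothetical stand-in for an op implemented as a CUDA extension."""

    @staticmethod
    @custom_fwd(cast_inputs=torch.float32)  # under autocast, cast fp16 inputs back to fp32
    def forward(ctx, weights, values):
        # the real code would call the compiled CUDA kernel here
        ctx.save_for_backward(weights, values)
        return (weights * values).sum(dim=-1)

    @staticmethod
    @custom_bwd  # run backward in the same autocast state as forward
    def backward(ctx, grad_out):
        weights, values = ctx.saved_tensors
        grad_out = grad_out.unsqueeze(-1)
        return grad_out * values, grad_out * weights


# usage: under autocast, upstream layers may hand this op float16 tensors;
# custom_fwd(cast_inputs=...) converts them to float32 before forward runs
weights = torch.rand(8, 16, device="cuda", requires_grad=True)
values = torch.rand(8, 16, device="cuda", requires_grad=True)
with torch.cuda.amp.autocast():
    hidden = torch.nn.functional.linear(values, torch.rand(16, 16, device="cuda"))  # fp16
    out = WeightedAccumulate.apply(weights, hidden)  # hidden is cast back to fp32 here
out.sum().backward()
```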

Thank you for your wonderful work.

iYuqinL avatar Nov 11 '22 14:11 iYuqinL

Hi, I also think this could give roughly a 2X speedup, or double the effective batch size, for text->3D (https://github.com/threestudio-project/threestudio/issues/138)... Is there any plan to support this? Otherwise I'll try to find someone who might be able to tackle it.

claforte avatar Jun 15 '23 19:06 claforte