
Is train loop memory-efficient?

Open · GLivshits opened this issue 2 years ago · 1 comment

Hi. I've found that you leave the whole GAN unfrozen and only step a specific optimizer (one for the generator, one for the discriminator). But when you call loss.backward(), gradients are computed for the WHOLE GAN, whereas each optimizer only needs the gradients of its own parameters. This causes additional memory use and increases iteration time. Please correct me if I'm wrong.

GLivshits · Aug 12 '21
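For reference, the concern can be illustrated with a minimal training-step sketch. This is an assumed pattern with hypothetical names (`set_requires_grad`, `train_step`), not this repository's actual code: `requires_grad` is toggled so that `backward()` only builds gradients for the network currently being stepped.

```python
# A minimal sketch (assumed names, not stylegan2-pytorch's actual code)
# of alternating GAN training where the network NOT being optimized is
# frozen, so backward() skips its gradients entirely.
import torch
import torch.nn as nn

def set_requires_grad(module: nn.Module, flag: bool) -> None:
    # Enable/disable gradient tracking for all parameters of a module.
    for p in module.parameters():
        p.requires_grad_(flag)

def train_step(G, D, g_opt, d_opt, real, latent, d_loss_fn, g_loss_fn):
    # --- Discriminator step: freeze G so no gradients (or autograd
    # graph) are built for its parameters during this backward pass ---
    set_requires_grad(G, False)
    set_requires_grad(D, True)
    d_opt.zero_grad()
    fake = G(latent)
    d_loss = d_loss_fn(D(real), D(fake.detach()))
    d_loss.backward()   # computes gradients for D only
    d_opt.step()

    # --- Generator step: freeze D ---
    set_requires_grad(G, True)
    set_requires_grad(D, False)
    g_opt.zero_grad()
    g_loss = g_loss_fn(D(G(latent)))
    g_loss.backward()   # D's parameters receive no gradients
    g_opt.step()
```

In the discriminator step, `fake.detach()` already stops gradients from flowing back into G, but freezing G's parameters additionally keeps autograd from recording G's forward graph at all (wrapping that forward in `torch.no_grad()` achieves the same saving). In the generator step, freezing D avoids allocating `.grad` buffers for D's parameters, which is the memory overhead the issue describes.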

Totally not.

Cads182 · Sep 05 '21