stylegan2-pytorch
Is train loop memory-efficient?
Hi. I've found that you unfreeze the whole GAN and make steps only via a specific optimizer (for the generator or the discriminator). But when you call loss.backward(), gradients are computed for the WHOLE GAN, whereas each optimizer only needs its own gradients. This causes additional memory usage and longer iteration times. Please correct me if I am wrong.
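For context, a minimal sketch of the pattern the question is describing: freezing the network that is not being optimized via requires_grad, so backward() neither computes nor stores .grad buffers for its parameters. All names here (the toy generator/discriminator, optimizers, and the softplus losses) are illustrative assumptions, not code from this repo.

```python
# Minimal sketch (not from this repo): toy G/D to illustrate freezing one
# network with requires_grad so backward() skips its parameter gradients.
import torch
import torch.nn as nn
import torch.nn.functional as F

def set_requires_grad(module, flag):
    for p in module.parameters():
        p.requires_grad_(flag)

generator = nn.Linear(8, 16)        # stand-in for the real generator
discriminator = nn.Linear(16, 1)    # stand-in for the real discriminator
g_optim = torch.optim.Adam(generator.parameters())
d_optim = torch.optim.Adam(discriminator.parameters())

noise = torch.randn(4, 8)
real = torch.randn(4, 16)

# --- Discriminator step: freeze G so its grads are neither computed nor stored ---
set_requires_grad(generator, False)
set_requires_grad(discriminator, True)
fake = generator(noise)
d_loss = (F.softplus(discriminator(fake.detach())).mean()
          + F.softplus(-discriminator(real)).mean())
d_optim.zero_grad()
d_loss.backward()
d_optim.step()

# --- Generator step: freeze D so no .grad buffers are allocated for it ---
set_requires_grad(generator, True)
set_requires_grad(discriminator, False)
fake = generator(noise)
g_loss = F.softplus(-discriminator(fake)).mean()
g_optim.zero_grad()
g_loss.backward()   # still backprops *through* D, but only G gets .grad
g_optim.step()
```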
Totally not.