convolutional-handwriting-gan

About gradient balancing

Open miranghimire opened this issue 3 years ago • 1 comment

There are three backward calls inside the gradient balancing between the generator loss and the OCR loss:

  • https://github.com/amzn/convolutional-handwriting-gan/blob/f7daa5045a281be23c1d20c5b74f12ffbddf69f9/models/ScrabbleGAN_baseModel.py#L354
  • https://github.com/amzn/convolutional-handwriting-gan/blob/f7daa5045a281be23c1d20c5b74f12ffbddf69f9/models/ScrabbleGAN_baseModel.py#L368
  • https://github.com/amzn/convolutional-handwriting-gan/blob/f7daa5045a281be23c1d20c5b74f12ffbddf69f9/models/ScrabbleGAN_baseModel.py#L374

Won't these calls accumulate gradients that then get applied when optimizer.step() is called? I thought the objective here was simply to compute the gradient balancing terms and multiply them into the loss. Could you give an overview of what's going on inside the gradient balancing, in case I misunderstood something?

miranghimire avatar Dec 23 '21 09:12 miranghimire

I just looked at the code; I think you're right, and there should be a self.netG.zero_grad() between the first and second backward calls. The third one isn't there to accumulate gradients; it's performed just so that the graph won't be retained.
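For anyone following along, here is a minimal, hypothetical sketch of the three-backward pattern being discussed, with the extra zero_grad in place. The names (netG, fake, loss_adv, loss_OCR, alpha) are illustrative and are not the repo's exact identifiers; this is not the actual ScrabbleGAN code, just an outline of the idea:

```python
import torch

def backward_G_balanced(netG, fake, loss_adv, loss_OCR, alpha=1.0, eps=1e-8):
    # 'fake' is the generated image; we need its .grad even though it is a
    # non-leaf tensor, so ask autograd to retain it.
    fake.retain_grad()

    # Backward #1: probe the gradient the OCR loss puts on the generated image.
    loss_OCR.backward(retain_graph=True)
    grad_OCR = fake.grad.detach().clone()

    # Without this, the OCR gradients stay in the .grad buffers and leak into
    # the next probe and the final update (the accumulation issue above).
    netG.zero_grad()
    fake.grad = None

    # Backward #2: probe the gradient from the adversarial loss.
    loss_adv.backward(retain_graph=True)
    grad_adv = fake.grad.detach().clone()

    # Balance: scale the OCR loss so its gradient magnitude is comparable to
    # (a fraction of) the adversarial gradient magnitude.
    scale = alpha * grad_adv.std() / (grad_OCR.std() + eps)

    netG.zero_grad()
    fake.grad = None

    # Backward #3: the real backward pass for the parameter update; no
    # retain_graph here, so the graph is freed.
    (loss_adv + scale * loss_OCR).backward()
```

The two probing passes only exist to measure gradient magnitudes, so their contributions have to be cleared before the final backward that the optimizer actually steps on.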

sharonFogel avatar Dec 26 '21 11:12 sharonFogel