swapping-autoencoder-pytorch

Second forward pass in the training loop

Open krips89 opened this issue 4 years ago • 2 comments

Hello author, thanks for trying to recreate the paper's work. I have the following question.

In the training code, there are two calls to the forward pass (one before the discriminator update and one before the generator update). I am wondering why this is needed. The following block is called twice:

```python
structure1, texture1 = encoder(real_img1)
_, texture2 = encoder(real_img2)

fake_img1 = generator(structure1, texture1)
fake_img2 = generator(structure1, texture2)
```

Even if the encoder/generator's gradient updates are disabled, that doesn't change the values the forward pass produces. Am I missing something?
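
For context, here is a minimal runnable sketch of the two-pass pattern. The toy `Encoder`/`Generator`/`Discriminator` modules and the simplified non-saturating logistic losses below are stand-ins for illustration, not the repo's actual architectures or loss terms (the real loop also includes a reconstruction term, per the paper); `requires_grad` is the usual toggle helper:

```python
import torch
from torch import nn, optim
import torch.nn.functional as F

# Toy stand-ins so the sketch runs; not the repo's actual models.
class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, x):
        h = self.conv(x)
        return h, h.mean(dim=(2, 3))  # (structure code, texture code)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(8, 3, 3, padding=1)

    def forward(self, structure, texture):
        return self.conv(structure * texture.view(-1, 8, 1, 1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 1, 3, padding=1)

    def forward(self, x):
        return self.conv(x).mean(dim=(1, 2, 3))

def requires_grad(model, flag):
    for p in model.parameters():
        p.requires_grad = flag

encoder, generator, discriminator = Encoder(), Generator(), Discriminator()
d_optim = optim.Adam(discriminator.parameters(), lr=2e-3)
g_optim = optim.Adam(list(encoder.parameters()) + list(generator.parameters()), lr=2e-3)
real_img1, real_img2 = torch.randn(4, 3, 32, 32), torch.randn(4, 3, 32, 32)

# --- discriminator step: first forward pass ---
# E/G have requires_grad=False here, so this pass records no
# autograd graph into them at all.
requires_grad(encoder, False)
requires_grad(generator, False)
requires_grad(discriminator, True)

structure1, texture1 = encoder(real_img1)
_, texture2 = encoder(real_img2)
fake_img1 = generator(structure1, texture1)
fake_img2 = generator(structure1, texture2)

d_loss = (F.softplus(-discriminator(real_img1)).mean()
          + F.softplus(discriminator(fake_img1)).mean()
          + F.softplus(discriminator(fake_img2)).mean())
d_optim.zero_grad()
d_loss.backward()  # also frees whatever graph was recorded
d_optim.step()

# --- generator step: the same block runs a second time ---
# The outputs are numerically identical, but this pass is what builds
# the autograd graph through E/G that g_loss.backward() needs.
requires_grad(encoder, True)
requires_grad(generator, True)
requires_grad(discriminator, False)

structure1, texture1 = encoder(real_img1)
_, texture2 = encoder(real_img2)
fake_img1 = generator(structure1, texture1)
fake_img2 = generator(structure1, texture2)

g_loss = (F.softplus(-discriminator(fake_img1)).mean()
          + F.softplus(-discriminator(fake_img2)).mean())
g_optim.zero_grad()
g_loss.backward()
g_optim.step()
```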

krips89 avatar Oct 08 '20 17:10 krips89

You may use retain_graph=True and reuse the first graph constructed. As I remember, the efficiency was similar.
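
For anyone landing here later, a sketch of that single-forward variant, continuing the toy setup above. Two caveats that are my own observations, not from this thread: plain `.backward()` calls would cross-contaminate the two updates' `.grad` buffers, and stepping the discriminator before the second backward can trip autograd's in-place version checks on recent PyTorch, so this sketch routes gradients explicitly with `torch.autograd.grad`:

```python
# Single forward pass; both losses share one autograd graph.
requires_grad(encoder, True)
requires_grad(generator, True)
requires_grad(discriminator, True)

structure1, texture1 = encoder(real_img1)
_, texture2 = encoder(real_img2)
fake_img1 = generator(structure1, texture1)
fake_img2 = generator(structure1, texture2)

fake_pred1, fake_pred2 = discriminator(fake_img1), discriminator(fake_img2)
real_pred = discriminator(real_img1)

d_loss = (F.softplus(-real_pred).mean()
          + F.softplus(fake_pred1).mean() + F.softplus(fake_pred2).mean())
g_loss = F.softplus(-fake_pred1).mean() + F.softplus(-fake_pred2).mean()

d_params = list(discriminator.parameters())
g_params = list(encoder.parameters()) + list(generator.parameters())

# retain_graph=True keeps the shared graph alive for the second grad call.
d_grads = torch.autograd.grad(d_loss, d_params, retain_graph=True)
g_grads = torch.autograd.grad(g_loss, g_params)

for p, g in zip(d_params, d_grads):
    p.grad = g
for p, g in zip(g_params, g_grads):
    p.grad = g

d_optim.step()
g_optim.step()  # note: G is updated against the pre-step D here,
                # unlike the two-pass loop, where D has already stepped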

rosinality avatar Oct 09 '20 01:10 rosinality

Exactly. Thanks.

krips89 avatar Oct 09 '20 11:10 krips89