swapping-autoencoder-pytorch
Second forward pass in the training loop
Hello author, thanks for reproducing the paper's work. I have the following question.
In the training code, there are two calls to the forward pass (one before the gradient update for the Discriminator and one before the update for the Generator), and I am wondering why this is needed. The following block is called twice:
```python
structure1, texture1 = encoder(real_img1)
_, texture2 = encoder(real_img2)
fake_img1 = generator(structure1, texture1)
fake_img2 = generator(structure1, texture2)
```
Even if the gradient updates for the encoder/generator are disabled, that doesn't change the result of the forward pass. Am I missing something?
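For context, here is a minimal, self-contained sketch of the pattern I mean, using toy stand-in modules and the standard non-saturating GAN losses (none of these class or helper names come from the repo; only the structure matters). The same four-line forward block runs once before the discriminator step and once more before the encoder/generator step:

```python
import torch
from torch import nn, optim
import torch.nn.functional as F

# Toy stand-ins (not the repo's real networks) so the structure is runnable.
class ToyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.structure = nn.Conv2d(3, 8, 3, padding=1)
        self.texture = nn.Conv2d(3, 8, 3, padding=1)

    def forward(self, x):
        return self.structure(x), self.texture(x).mean(dim=(2, 3))

class ToyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(8, 3, 3, padding=1)
        self.to_scale = nn.Linear(8, 3)

    def forward(self, structure, texture):
        scale = self.to_scale(texture).unsqueeze(-1).unsqueeze(-1)
        return self.conv(structure) * scale

class ToyDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 1, 3, padding=1)

    def forward(self, x):
        return self.conv(x).mean(dim=(1, 2, 3))

encoder, generator, discriminator = ToyEncoder(), ToyGenerator(), ToyDiscriminator()
g_optim = optim.Adam(list(encoder.parameters()) + list(generator.parameters()), lr=1e-3)
d_optim = optim.Adam(discriminator.parameters(), lr=1e-3)

real_img1 = torch.randn(4, 3, 16, 16)
real_img2 = torch.randn(4, 3, 16, 16)

# First forward pass, feeding the discriminator update.
# (The real training code disables encoder/generator gradient updates during
# this step; that is omitted here to keep the toy short.)
structure1, texture1 = encoder(real_img1)
_, texture2 = encoder(real_img2)
fake_img1 = generator(structure1, texture1)
fake_img2 = generator(structure1, texture2)

fake_pred = discriminator(torch.cat((fake_img1, fake_img2), 0))
real_pred = discriminator(real_img1)
d_loss = F.softplus(fake_pred).mean() + F.softplus(-real_pred).mean()
d_optim.zero_grad()
d_loss.backward()  # frees the graph built by the first forward pass
d_optim.step()

# Second forward pass, identical to the first, feeding the encoder/generator update.
structure1, texture1 = encoder(real_img1)
_, texture2 = encoder(real_img2)
fake_img1 = generator(structure1, texture1)
fake_img2 = generator(structure1, texture2)

fake_pred = discriminator(torch.cat((fake_img1, fake_img2), 0))
g_loss = F.softplus(-fake_pred).mean()
g_optim.zero_grad()
g_loss.backward()
g_optim.step()
```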
You can maybe use retain_graph=True and reuse the first graph that was constructed. I remember the efficiency was similar.
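Roughly like this, reusing the toy setup from the sketch above (again just an illustration under those assumptions, not the repo's exact code):

```python
# Single forward pass; its autograd graph serves both updates.
structure1, texture1 = encoder(real_img1)
_, texture2 = encoder(real_img2)
fake_img1 = generator(structure1, texture1)
fake_img2 = generator(structure1, texture2)

# Discriminator step.
fake_pred = discriminator(torch.cat((fake_img1, fake_img2), 0))
real_pred = discriminator(real_img1)
d_loss = F.softplus(fake_pred).mean() + F.softplus(-real_pred).mean()
d_optim.zero_grad()
d_loss.backward(retain_graph=True)  # keep the encoder/generator graph alive
d_optim.step()

# Encoder/generator step, reusing fake_img1 / fake_img2 from the same graph.
fake_pred = discriminator(torch.cat((fake_img1, fake_img2), 0))
g_loss = F.softplus(-fake_pred).mean()
g_optim.zero_grad()  # clears the gradients the D backward left on the encoder/generator
g_loss.backward()
g_optim.step()
```

Without retain_graph=True, the second backward would fail because the first backward already freed the shared graph.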
Exactly. Thanks.