DRIT-Tensorflow

Content Loss

Open samanehAb opened this issue 6 years ago • 0 comments

I am using your implementation on my own data, and I see that the discriminator content loss decreases very quickly; after a few epochs it becomes stable at very small values (on the order of 3e-13), so it stops being helpful. On the other hand, the G_A_content_loss and G_B_content_loss increase suddenly and then stabilize after a while. I checked this with several different datasets I have available, and the same thing happens for all of them. I have also tested the original PyTorch implementation, and it works fine on my data: the content loss doesn't vanish quickly, and the quality of the generated samples keeps improving. I was wondering if you have observed similar behavior and have any clue how to avoid it. Do you get results comparable to the original implementation?
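(Not part of the original report, but a minimal sketch of why a discriminator loss near 3e-13 is a problem, assuming a standard sigmoid cross-entropy GAN loss; the actual TF implementation may use a different objective, e.g. LSGAN. When the content discriminator classifies fakes with near-perfect confidence, its own loss collapses toward zero and the gradient it passes back to the generator through the saturating loss term vanishes too.)

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def d_loss_fake(d_logit):
    """Discriminator cross-entropy on a fake sample: -log(1 - sigmoid(d))."""
    return -math.log(1.0 - sigmoid(d_logit))

def g_grad_saturating(d_logit):
    """Gradient of the saturating generator term log(1 - sigmoid(d))
    with respect to the discriminator logit: -sigmoid(d)."""
    return -sigmoid(d_logit)

# Early in training: the discriminator is uncertain (logit near 0),
# so its loss is large and the generator still receives gradient.
print(d_loss_fake(0.0))          # ~0.693
print(g_grad_saturating(0.0))    # -0.5

# After the discriminator "wins": a confidently-fake logit (e.g. -29,
# a hypothetical value) drives the loss down to ~3e-13 -- the order of
# magnitude in the report -- and the generator gradient to ~0.
print(d_loss_fake(-29.0))
print(g_grad_saturating(-29.0))
```

This is only an illustration of the symptom, not a diagnosis of the TF code; common mitigations for this kind of discriminator saturation include using a non-saturating or least-squares generator loss, or updating the discriminator less often.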

samanehAb — Sep 26 '18 07:09