context-encoder
Loss_D goes to 0
Hi,
First of all, great and interesting work. Congratulations! I recently started my Ph.D., and your paper was one of the few interesting and helpful baselines that helped me get up to speed on the inpainting topic.
I tried Context Encoder (CE) on my lab's locally obtained dataset (~0.5 million images) and it outperformed many other inpainting methods that we experimented with.
But recently, I have been trying to use CE on a much larger dataset (over 4 million images). During training, after the second or third epoch, the discriminator loss starts approaching 0, which apparently means the generator network is no longer learning.
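To illustrate what I mean (this is my own toy numerical sketch, not code from the CE repository): once the discriminator becomes near-perfect, its sigmoid saturates on fake samples, and the gradient of the original minimax generator loss log(1 - D(G(z))) with respect to the discriminator's logit vanishes, so the generator stops receiving a useful signal. The non-saturating variant -log(D(G(z))) keeps a usable gradient in the same regime:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical logits the discriminator assigns to fake samples;
# very negative values mean D confidently labels them fake
# (the regime where Loss_D is near 0).
logits = np.array([-1.0, -5.0, -10.0])
d = sigmoid(logits)  # D(G(z))

# Gradient of the minimax generator loss log(1 - D) w.r.t. the logit:
# equals -D, which vanishes as D(G(z)) -> 0.
grad_saturating = -d

# Gradient of the non-saturating loss -log(D) w.r.t. the logit:
# equals D - 1, which stays near -1 in the same regime.
grad_nonsaturating = d - 1.0

print(grad_saturating)      # shrinks toward 0 as D gets confident
print(grad_nonsaturating)   # stays close to -1
```

So when Loss_D collapses to 0, the generator's gradient effectively dies under the saturating formulation, which matches the stalled training I observe.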
Can I get your expert opinion on what the causes may be, and what hyperparameters would be suitable for training the network on such a large dataset?
Below is a screenshot showing the training progress: