GP-GAN
loss is nan
Hello author,
During training, the loss became NaN. After debugging, I found that all activations became 0 after the first convolutional layer. After the second convolutional layer, some values became extremely large or small, and in the subsequent convolutional layers some values turned into 0 or NaN.
I removed batch normalization, set the learning rate to 0, and checked the weights of the convolutional layers, but none of this solved the problem. I really need your help. Thank you very much.
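To help locate where the numbers first go wrong, a small helper like the one below can be called on each layer's output during the forward pass. This is only a generic NumPy sketch (not part of the GP-GAN code); `layer_stats` is a hypothetical name, and in Chainer you would pass it `variable.array` for each intermediate `Variable`.

```python
import numpy as np

def layer_stats(x):
    """Summarize an activation array: useful for finding the first
    layer whose output degenerates to all zeros, inf, or NaN."""
    arr = np.asarray(x, dtype=np.float64)
    finite = arr[np.isfinite(arr)]
    return {
        "min": float(finite.min()) if finite.size else float("nan"),
        "max": float(finite.max()) if finite.size else float("nan"),
        "zeros": int((arr == 0).sum()),
        "nans": int(np.isnan(arr).sum()),
        "infs": int(np.isinf(arr).sum()),
    }

# Example: a healthy activation vs. one that has blown up
healthy = np.random.randn(4, 64, 8, 8).astype(np.float32)
blown_up = healthy.copy()
blown_up[0, 0] = np.nan
print("conv1:", layer_stats(healthy))
print("conv2:", layer_stats(blown_up))
```

Printing these stats after every convolution makes it easy to see whether the breakdown starts at the weights, the activations, or the normalization step.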
- Do you run on your own data?
- Can you run inference correctly?
- Does your Chainer version match the one the repo expects? I think it's likely that a newer Chainer behaves differently from the old one.
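For the last point, a quick way to check is to compare the installed Chainer version against the one pinned by the repository. This is a generic sketch: `EXPECTED` is a placeholder, not the actual version GP-GAN requires, so substitute the value from the repo's requirements.

```python
# Hypothetical expected version -- replace with the one pinned
# in the GP-GAN repository's requirements.
EXPECTED = "1.24.0"

def version_tuple(v):
    """Parse a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split(".")[:3])

try:
    import chainer
    installed = chainer.__version__
except ImportError:
    installed = None

if installed is None:
    print("chainer is not installed")
elif version_tuple(installed) != version_tuple(EXPECTED):
    print(f"version mismatch: installed {installed}, expected {EXPECTED}")
else:
    print("chainer version matches")
```

If the versions differ, installing the pinned version in a fresh environment is usually the fastest way to rule this out.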