faceshifter
Training with only the CelebA-HQ dataset
Thanks for the great work!
When I try to train AEI-Net with 30k images from the CelebA-HQ dataset using 6 P40 32G GPUs, I get the training curve below:
All the other settings are left at their defaults, and the generated swapped faces also look weird:
Should I continue training, or do you have any suggestions? Thanks in advance!
Hi! Thanks for the compliment. Training with only CelebA-HQ is quite a risky choice: the dataset bias can affect the total loss. In your loss graph, the reconstruction loss is very high. You could try increasing the coefficient of the reconstruction loss if you want to train with only CelebA-HQ.
Be careful about overfitting.
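For reference, raising that coefficient would look roughly like the sketch below. The weight names and default values are assumptions loosely following the FaceShifter paper's weighting, not this repository's actual config keys:

```python
import torch

def total_loss(l_adv, l_att, l_id, l_rec,
               w_adv=1.0, w_att=10.0, w_id=5.0, w_rec=10.0):
    """Hypothetical AEI-Net-style weighted objective; weights are illustrative."""
    return w_adv * l_adv + w_att * l_att + w_id * l_id + w_rec * l_rec

# Example: doubling the reconstruction weight relative to the
# paper's lambda_rec = 10 to put more pressure on reconstruction.
loss = total_loss(torch.tensor(0.9), torch.tensor(0.05),
                  torch.tensor(0.2), torch.tensor(0.4), w_rec=20.0)
```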
Thank you so much for your reply!
By the way, what is the expected value of the reconstruction loss for a well-trained model? If possible, could you share your loss graph as a reference? Also, did you set grad_clip to zero?
- The reconstruction loss should be around 1e-3~1e-4 for a well-trained model. You can see a reconstructed example in my Colab notebook, which was recently added.
- No, I can't share the loss graph.
- Yes, I didn't clip the gradient. Setting the grad_clip option to zero means no clipping (see the sketch below).
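A minimal sketch of that convention, assuming a PyTorch training step and a hypothetical grad_clip hyperparameter mirroring the option discussed above:

```python
import torch

def train_step(model, optimizer, loss, grad_clip=0.0):
    # grad_clip = 0 disables clipping entirely; any positive value
    # is used as the maximum gradient norm.
    optimizer.zero_grad()
    loss.backward()
    if grad_clip > 0:
        torch.nn.utils.clip_grad_norm_(model.parameters(), grad_clip)
    optimizer.step()
```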
Could you please tell me how long it takes to finish the training?
For the CelebA-HQ dataset only, it takes 2-3 hours per epoch.
Can you benefit from multi-GPU training? Does it really increase the training speed?
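For context, multi-GPU data parallelism in PyTorch is typically enabled with something like the following; this is a generic sketch, not this repository's actual launch code:

```python
import torch
import torch.nn as nn

# DataParallel splits each batch across the visible GPUs, so throughput
# usually improves when the per-GPU batch is large enough to amortize
# the scatter/gather overhead.
model = nn.Linear(512, 512)  # stand-in for the AEI-Net generator
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.to("cuda" if torch.cuda.is_available() else "cpu")
```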
Four years have passed. Could you please share the arcface.pth file? The download link has expired. Many thanks!