Changho Choi

10 comments by Changho Choi

Hi, thank you for the compliments. For now, we don't have a plan to release the final pre-trained model. That said, we will offer an API service in the near future...

Hi! Thanks for your compliments. Training with only CelebA-HQ is quite a risky choice; the dataset bias can affect the total loss. In your loss graph, the reconstruction loss is...

1. The reconstruction loss should be around 1e-3~1e-4 for a well-trained model (a rough check is sketched below). You can see a reconstructed example in my Colab notebook, which was recently added. 2. No, I can't share...
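
A minimal sketch of that check, assuming a PyTorch generator that takes a source and a target image; `model` and `loader` here are placeholders, not the actual training code:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mean_reconstruction_loss(model, loader, device="cuda"):
    """Average L1 self-reconstruction error over a validation loader."""
    model.eval()
    total, count = 0.0, 0
    for target in loader:
        target = target.to(device)
        # Self-reconstruction: source identity and target image are the same.
        recon = model(target, target)
        total += F.l1_loss(recon, target).item()
        count += 1
    return total / max(count, 1)

# loss = mean_reconstruction_loss(model, val_loader)
# print(f"mean L1 reconstruction loss: {loss:.1e}")  # well-trained: roughly 1e-3 to 1e-4
```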

Hi! That was very fast training! 1. Yes, I used the full dataset. I don't know the IJB-C dataset; the distribution of the dataset can influence your model. 2. In the...

- I trained with a batch size of 32, the same as in the paper (two V100 32GB GPUs, batch size 16 each; see the sketch below).
- Training a GAN is very unstable. If...
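
A rough sketch of that multi-GPU setup (not the actual training script; `dataset` and `model` are placeholders), where a per-GPU batch size of 16 across two GPUs gives the effective batch size of 32:

```python
import torch
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler

def build_loader(dataset, rank, world_size, per_gpu_batch=16):
    # 2 GPUs x 16 samples each = effective global batch size of 32.
    sampler = DistributedSampler(dataset, num_replicas=world_size,
                                 rank=rank, shuffle=True)
    return DataLoader(dataset, batch_size=per_gpu_batch, sampler=sampler,
                      num_workers=4, pin_memory=True, drop_last=True)

def wrap_model(model, rank):
    # One process per GPU; gradients are averaged across the two replicas.
    torch.cuda.set_device(rank)
    return DDP(model.to(rank), device_ids=[rank])
```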

@Daisy-Zhang @suzie26 @tyrink @akafen @lefsiva @chinasilva @Seanseattle @Poloangelo @ZhiluDing @princessmittens @DeliaJIAMIN @tamnguyenvan @cwalt2014 Check out [HifiFace](https://github.com/mindslab-ai/hififace), our implementation of a more recent face-swapping model with the pre-trained model.

Hi! Thank you for your interest! 1. Are you talking about [this issue](https://github.com/johannwyh/HifiFace/issues/3#issuecomment-917888760) on the official project GitHub? Honestly, I didn't know this existed. I decided on some details that...

@WhiteSigility Thanks for all your work! First of all, I am very glad that someone actually tried my code. This will be very helpful to me and other...

@WhiteSigility The input image should be square. You should preprocess the images with 3DDFA_v2's landmarks, just like [FFHQ](https://github.com/NVlabs/ffhq-dataset) did.
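
A simplified sketch of that kind of preprocessing (this is not the exact FFHQ alignment recipe, which also rotates the crop based on eye/mouth positions; `landmarks` is assumed to be an (N, 2) array from 3DDFA_v2):

```python
import numpy as np
from PIL import Image

def square_crop_from_landmarks(img: Image.Image, landmarks: np.ndarray,
                               scale: float = 1.6, out_size: int = 256) -> Image.Image:
    """Crop a square region around the face landmarks and resize it."""
    x_min, y_min = landmarks.min(axis=0)
    x_max, y_max = landmarks.max(axis=0)
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
    # Expand the tight landmark box by `scale` and force it to be square.
    half = max(x_max - x_min, y_max - y_min) * scale / 2.0
    box = (int(cx - half), int(cy - half), int(cx + half), int(cy + half))
    # PIL fills regions outside the image with black when the box overflows.
    return img.crop(box).resize((out_size, out_size), Image.BILINEAR)
```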

In my experience, using 256x256 should be fine. Also, the Deep3dFaceRecon repo mentions that they applied a number of [augmentations during training](https://github.com/sicxu/Deep3DFaceRecon_pytorch#-training-configuration), so I thought Deep3dFaceRecon should be robust enough...