encoder4editing
Inversion result is not good, but the identity similarity is high
Hello, I trained e4e on my own dataset with a pretrained StyleGAN2 model, but the inversion result looks totally different from the source image. Can you give me some suggestions?
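(For reference, the identity similarity reported by these pipelines is usually the cosine similarity between face-recognition embeddings of the source and the inversion, something along these lines; the embedding vectors here are hypothetical stand-ins for what a network such as ArcFace would produce.)

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings; in practice these come from a face-recognition
# network applied to the source image and the inverted image.
src_emb = np.array([0.2, 0.5, 0.1])
inv_emb = np.array([0.2, 0.5, 0.1])
print(cosine_similarity(src_emb, inv_emb))  # identical vectors -> 1.0
```

A high value only means the recognition network sees the same identity; it does not guarantee a perceptually faithful reconstruction.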
Hi @zeta7337! From a quick glance it seems like the images might not be aligned according to FFHQ's alignment method. If you are using the pretrained FFHQ StyleGAN2, this might be the cause of your results.
To better understand the experiment settings, could you please provide me with answers to the following:
- Are you using the official StyleGAN2 FFHQ model? If so, is your training data aligned according to the FFHQ face alignment? (You can look into the inference script or notebook to see how to align your train and test datasets.)
- In case you trained the StyleGAN on your custom dataset, could you provide some results of generated StyleGAN images?
- In case you trained the StyleGAN on your custom (unaligned?) dataset, did you compute the identity loss on the entire image, or just on the crop of the face?
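For reference, the FFHQ-style crop is derived from 68-point dlib landmarks roughly as in the sketch below (a paraphrase of the standard FFHQ preprocessing geometry; the repo's inference notebook wraps the same logic, so treat this as illustrative rather than a drop-in replacement):

```python
import numpy as np

def ffhq_quad(lm: np.ndarray) -> np.ndarray:
    """Given 68 dlib landmarks (shape (68, 2)), return the four corners of
    the oriented square crop used by FFHQ-style alignment."""
    eye_left = lm[36:42].mean(axis=0)
    eye_right = lm[42:48].mean(axis=0)
    eye_avg = (eye_left + eye_right) * 0.5
    eye_to_eye = eye_right - eye_left
    mouth_avg = (lm[48] + lm[54]) * 0.5   # mouth corners
    eye_to_mouth = mouth_avg - eye_avg

    # Oriented crop rectangle: x spans the face width, y the height.
    x = eye_to_eye - np.flipud(eye_to_mouth) * [-1, 1]
    x /= np.hypot(*x)
    x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8)
    y = np.flipud(x) * [-1, 1]
    c = eye_avg + eye_to_mouth * 0.1      # crop center

    # Corners in order: top-left, bottom-left, bottom-right, top-right.
    return np.stack([c - x - y, c - x + y, c + x + y, c + x - y])
```

The resulting quad is then used to warp and resize the face to the resolution StyleGAN was trained on; if your training images skip this step, the encoder sees geometry the generator never learned.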
Hope we can fix the training results! Best, Omer
Thank you! You are right, the images were not aligned correctly. I have some other questions.
1. Should I train the StyleGAN generator and the e4e encoder on the same dataset?
2. I inverted an image of an Asian movie star with the FFHQ encoder, and the result is not that good; the inversion does not look like the source image. I think the encoder and generator are not familiar with Asian faces, because most images in FFHQ are of Europeans or Americans. If I train the generator and the e4e encoder on a much bigger dataset that covers all kinds of people, should the inversion get better?
3. I need an encoder that inverts images well; I don't need to edit the images. Do you have any suggestions for the training settings?
Hello, have you resolved the issue?
Was the inversion issue for Asian faces solved?