Is it possible to train the encoder and generator together?
Hello, dear authors, thank you for your great work. I have read several of your papers and know that your team has very good insight into generative models, so I would like to ask a few questions:
-
I think this work and pixel2style2pixel are somewhat similar in proposing a hierarchical StyleGAN encoder; however, it is hard for both methods to reconstruct images precisely. Although the results look similar to the original images, they differ considerably pixel-wise, and the SSIM and PSNR are relatively low (compared to other reconstruction tasks such as SR, deblurring, de-raining, etc.). What do you think is the reason? Is there any way to improve the reconstruction quality?
-
VAE-GAN combines a VAE and a GAN. Is there any chance of training a StyleGAN and a StyleGAN encoder jointly in this way, or with some similar framework?
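To make the question concrete, here is a minimal sketch of the VAE-GAN-style loss composition I have in mind: a reconstruction term, a KL term on the encoder's latent distribution, and an adversarial term for the generator. The networks below are just random linear maps standing in for the real StyleGAN generator and encoder, and the weightings are placeholder assumptions, purely to illustrate how the terms would combine:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumption: the real models would be StyleGAN's generator
# and a hierarchical encoder; these random linear maps only illustrate the loss).
W_enc = rng.normal(size=(16, 64)) * 0.1  # "encoder": 64-d image -> 16-d (mu, logvar)
W_dec = rng.normal(size=(64, 8)) * 0.1   # "generator": 8-d latent -> 64-d image

def encode(x):
    h = x @ W_enc.T                   # (batch, 16)
    mu, logvar = h[:, :8], h[:, 8:]   # split into mean and log-variance
    return mu, logvar

def decode(z):
    return z @ W_dec.T                # (batch, 64)

def vae_gan_losses(x, disc_fake):
    """Return the three VAE-GAN loss terms for a batch of images x."""
    mu, logvar = encode(x)
    eps = rng.normal(size=mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps                        # reparameterization trick
    x_rec = decode(z)
    rec = np.mean((x - x_rec) ** 2)                            # pixel reconstruction
    kl = -0.5 * np.mean(1 + logvar - mu**2 - np.exp(logvar))   # KL to N(0, I)
    adv = -np.mean(np.log(disc_fake + 1e-8))                   # non-saturating generator loss
    return rec, kl, adv

x = rng.normal(size=(4, 64))
disc_fake = rng.uniform(0.1, 0.9, size=4)   # placeholder discriminator outputs on fakes
rec, kl, adv = vae_gan_losses(x, disc_fake)
total = rec + 0.1 * kl + 0.01 * adv         # placeholder weights
```

My concern is how the KL term would interact with StyleGAN's W/W+ space, where the latent distribution is not a standard Gaussian, so I am unsure whether this composition is sensible as-is.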