UI2I_via_StyleGAN2
Unsupervised image-to-image translation method via pre-trained StyleGAN2 network
How do you specify a picture for testing?
BUG: Line 104 and Line 107 should add the parameter `input_is_latent=True`, or the content and the reference cannot be used properly. ISSUE: Also, `truncation=0.5` can sometimes be too strict, making the...
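For context on why a low truncation value can be "too strict": the truncation trick pulls a latent toward the average latent of the mapping network, and a small psi collapses samples toward that mean, washing out the distinctive features of the content/reference latent. A minimal sketch of the idea, with plain Python lists standing in for W-space tensors (the names `w` and `w_mean` are illustrative, not taken from the repo):

```python
def truncate(w, w_mean, psi):
    """Truncation trick: interpolate a latent w toward the mean latent.

    psi = 1.0 keeps w as-is; psi = 0.0 collapses every sample to
    w_mean, which is why a low psi (e.g. 0.5) can erase the
    distinctive features of a content or reference latent.
    """
    return [m + psi * (x - m) for x, m in zip(w, w_mean)]

w = [0.8, -1.2, 2.0]      # a latent produced by the mapping net
w_mean = [0.1, 0.0, 0.2]  # running average of many mapped latents

print(truncate(w, w_mean, 0.5))  # pulled halfway toward the mean
```

With `psi=0.5` every coordinate moves halfway to the mean, so a distinctive reference latent loses half of its deviation from the average face; raising psi toward 1.0 preserves more of it.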
https://github.com/HideUnderBush/UI2I_via_StyleGAN2/blob/bd4cd6af326f22f55c58b9b3886d1a5bbdb7460f/closed_form_factorization.py#L17 I've been digging through GitHub for help on tweaking `g_ema` from the generator. I have this ticket: https://github.com/danielroich/PTI/issues/26 The maths is a bit beyond me, but I suspect...
Hi, @HideUnderBush! Thanks for your amazing work! I am trying to reimplement the face2anime experiments on the Danbooru dataset. However, I have run into some confusion; could you give me some advice? Step 1:...
Thanks for your magnificent research! I wonder if I could get your anime dataset, since none of my datasets give me reasonable results.
Thank you for your amazing work. I am a little confused about the layer-swap part of your implementation. It seems that you first pass the latent code into the...
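For readers unfamiliar with layer swapping: the general idea is to build a blended generator whose layers below some resolution come from one checkpoint (e.g. the FFHQ base model, for structure) and whose remaining layers come from the fine-tuned checkpoint (e.g. anime, for texture). A sketch with plain dicts standing in for PyTorch state_dicts; the simplified `convs.<res>.weight` key naming is an assumption for illustration and does not match the real StyleGAN2 checkpoint keys:

```python
def layer_swap(base_state, finetuned_state, swap_from_res):
    """Blend two generator state_dicts: layers at resolutions below
    swap_from_res come from base_state, layers at swap_from_res and
    above come from finetuned_state. Which model supplies which band
    is a design choice; swap in the other direction to trade
    structure for style.
    """
    blended = {}
    for key, value in base_state.items():
        res = int(key.split(".")[1])  # resolution encoded in the key
        blended[key] = finetuned_state[key] if res >= swap_from_res else value
    return blended

ffhq = {"convs.4.weight": "ffhq4", "convs.8.weight": "ffhq8",
        "convs.16.weight": "ffhq16"}
anime = {"convs.4.weight": "anime4", "convs.8.weight": "anime8",
         "convs.16.weight": "anime16"}

print(layer_swap(ffhq, anime, swap_from_res=8))
```

In a real script the blended dict would be loaded back with `load_state_dict` into a third generator instance.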
Hi @HideUnderBush, after downloading 550000.pt and trying to convert the image that you provide, I got a result like the one below
colab
Hi, can you please add a Google Colab?
Hi, thanks for sharing the implementation! I have a question: the style code of the specified reference is not used in gen_ref.py. Noise is used to generate the reference and...
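For context, the usual way to make a generated image follow a specific reference is per-layer style mixing: feed the content latent to the coarse style inputs (pose, structure) and the reference latent to the fine ones (texture, color). A minimal sketch, with lists of per-layer latents as stand-ins for the real `[n_latent, 512]` tensors (the function name and the coarse/fine split point are illustrative, not from gen_ref.py):

```python
def style_mix(content_w, reference_w, coarse_layers):
    """Per-layer latent mixing: the first coarse_layers style inputs
    take the content latent (structure), the remaining layers take
    the reference latent (appearance). Both inputs are lists with one
    latent per generator layer.
    """
    return content_w[:coarse_layers] + reference_w[coarse_layers:]

content = ["c0", "c1", "c2", "c3"]    # content latent, per layer
reference = ["r0", "r1", "r2", "r3"]  # reference latent, per layer

print(style_mix(content, reference, coarse_layers=2))
```

Replacing the noise-sampled latent in gen_ref.py with an inverted reference latent mixed this way would make the output actually depend on the chosen reference image.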
Thanks for your excellent work, but it seems that when fine-tuning the model on new-domain data, the mapping net (the 8-layer MLP) is not frozen, which conflicts with your paper, though...
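For reference, freezing a submodule in PyTorch is just setting `requires_grad = False` on its parameters, typically selected by name prefix from `named_parameters()`. A sketch of the selection logic with a tiny stand-in class instead of real `torch.nn.Parameter` objects; the `style.` prefix is a guess based on rosinality-style StyleGAN2 ports, where the mapping MLP usually lives under `generator.style`, so verify it against the actual parameter names:

```python
class Param:
    """Stand-in for a torch parameter: a name plus a grad flag."""
    def __init__(self, name):
        self.name = name
        self.requires_grad = True

def freeze_by_prefix(params, prefix="style."):
    """Disable gradients for every parameter whose name starts with
    prefix, and return the names that were frozen. With real torch
    code, params would be [(n, p) for n, p in model.named_parameters()].
    """
    frozen = []
    for p in params:
        if p.name.startswith(prefix):
            p.requires_grad = False
            frozen.append(p.name)
    return frozen

params = [Param("style.1.weight"), Param("style.1.bias"), Param("convs.0.weight")]
print(freeze_by_prefix(params))
```

The optimizer should then be built only from the still-trainable parameters, e.g. filtering on `requires_grad` before passing them to `torch.optim.Adam`.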