
Layer swap in gen_multi_style.py

Open · crownk1997 opened this issue on Jan 6, 2021 · 0 comments

Thank you for your amazing work. I am a little confused about the layer-swap part of your implementation. It seems that you first pass the latent code through the base model, save the intermediate activation, and then feed it into the target model, as follows:

```python
# Pass the latent through the base model and save the activation at swap_layer.
img1, swap_res = g_ema1([input_latent], input_is_latent=True,
                        save_for_swap=True, swap_layer=args.swap_layer)

for i in range(args.stylenum):
    # Sample a fresh style code, then inject the saved activation into the target model.
    sample_z_style = torch.randn(1, 512, device=args.device)
    img_style, _ = g_ema2([input_latent], truncation=0.5,
                          truncation_latent=mean_latent, swap=True,
                          swap_layer=args.swap_layer, swap_tensor=swap_res,
                          multi_style=True, multi_style_latent=[sample_z_style])
    print(i)
    img_style_name = args.output + "_style_" + str(i) + ".png"
    img_style = make_image(img_style)
    out_style = Image.fromarray(img_style[0])
    out_style.save(img_style_name)
```
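For reference, here is how I currently picture the mechanism. This is a minimal, self-contained sketch under my own assumptions (`ToyGenerator`, its layer layout, and the swap bookkeeping are illustrative stand-ins, not the repo's `model.py`): the base model saves its activation at `swap_layer`, and the target model overwrites its own activation at that layer with the saved tensor before finishing synthesis.

```python
import torch
import torch.nn as nn

class ToyGenerator(nn.Module):
    """Stand-in for a StyleGAN2 synthesis network: a stack of conv blocks."""

    def __init__(self, n_layers=6, channels=32):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(n_layers)
        )
        self.to_rgb = nn.Conv2d(channels, 3, 1)

    def forward(self, x, save_for_swap=False, swap=False,
                swap_layer=3, swap_tensor=None):
        out, swap_res = x, None
        for i, block in enumerate(self.blocks):
            out = torch.relu(block(out))
            if save_for_swap and i == swap_layer:
                # Base model: remember the activation at the swap layer.
                swap_res = out
            if swap and i == swap_layer:
                # Target model: continue synthesis from the base model's
                # activation instead of its own.
                out = swap_tensor
        return self.to_rgb(out), swap_res

g_base, g_target = ToyGenerator(), ToyGenerator()
x = torch.randn(1, 32, 16, 16)
img1, swap_res = g_base(x, save_for_swap=True, swap_layer=3)
img2, _ = g_target(x, swap=True, swap_layer=3, swap_tensor=swap_res)
```

If this reading is wrong, please correct me.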

Is it true that you are trying to keep the low-level information, such as shape and pose, from the original model while taking the lighting and texture from the target model?
