stylegan2-pytorch
Can we use the .pt model to generate images and the corresponding dlatents directly?
Hello, can I use the .pt model to generate dlatents and then generate the images, as in the official implementation:
src_latents = np.stack([np.random.RandomState(seed).randn(Gs.input_shape[1]) for seed in src_seeds])
dst_latents = np.stack([np.random.RandomState(seed).randn(Gs.input_shape[1]) for seed in dst_seeds])
src_dlatents = Gs.components.mapping.run(src_latents, None) # [seed, layer, component]
dst_dlatents = Gs.components.mapping.run(dst_latents, None) # [seed, layer, component]
src_images = Gs.components.synthesis.run(src_dlatents, randomize_noise=False, **synthesis_kwargs)
dst_images = Gs.components.synthesis.run(dst_dlatents, randomize_noise=False, **synthesis_kwargs)
@wytcsuch sure, I can probably offer a similar interface. What is dst_latents vs src_latents?
Maybe my problem was not clearly described. I want to sample z randomly, convert z to a dlatent, and finally generate the corresponding image: z -> dlatent -> image
z = np.stack([np.random.RandomState(seed).randn(Gs.input_shape[1]) for seed in src_seeds])
dlatents = Gs.components.mapping.run(z,None) # [seed, layer, component]
images = Gs.components.synthesis.run(dlatents, randomize_noise=False, **synthesis_kwargs)
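The z -> dlatent -> image flow above can be sketched in PyTorch with toy stand-ins for the mapping and synthesis networks. This is only an illustration of the data flow and the `[seed, layer, component]` dlatent shape, assuming 512-dim latents and 14 synthesis layers; the module names and dimensions here are made up and are not the actual stylegan2-pytorch API:

```python
import torch
import torch.nn as nn

# Toy stand-ins for StyleGAN2's mapping and synthesis networks,
# purely to illustrate z -> dlatent (w) -> image. The real networks
# are much larger; dimensions here are illustrative.
latent_dim = 512
num_layers = 14          # w is broadcast once per synthesis layer
image_size = 32          # tiny output for the sketch

mapping = nn.Sequential(  # z -> w
    nn.Linear(latent_dim, latent_dim), nn.LeakyReLU(0.2),
    nn.Linear(latent_dim, latent_dim),
)
synthesis = nn.Sequential(  # w -> flattened RGB image (toy)
    nn.Linear(latent_dim, 3 * image_size * image_size), nn.Tanh(),
)

z = torch.randn(4, latent_dim)                      # batch of random latents
w = mapping(z)                                      # dlatents, shape [4, 512]
dlatents = w.unsqueeze(1).repeat(1, num_layers, 1)  # [seed, layer, component]
images = synthesis(w).view(-1, 3, image_size, image_size)

print(dlatents.shape)  # torch.Size([4, 14, 512])
print(images.shape)    # torch.Size([4, 3, 32, 32])
```

Recording the `dlatents` tensor alongside `images` gives exactly the (dlatent, image) pairs the question asks for.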
You don't need to understand dst_latents and src_dlatents specifically; they are both dlatents. I just need to record the dlatents and the corresponding images.
@wytcsuch ok, it's much clearer to me now
yup, I'll provide you an API similar to the official repo soon
@wytcsuch https://github.com/lucidrains/stylegan2-pytorch#coding let me know if that works for you
@lucidrains thanks a lot, I'll try it. I think it should work.
Hello bro, does this encoder work?