
Converting to NVLabs stylegan2-pytorch or stylegan2-pytorch-ada


Is there a script for converting the weights to NVLabs stylegan2-pytorch or stylegan2-pytorch-ada?

sarmientoj24 avatar Dec 21 '21 15:12 sarmientoj24

Currently there isn't one.

rosinality avatar Dec 22 '21 00:12 rosinality

How different is the Generator structure from the stylegan2-pytorch or stylegan2-pytorch-ada?

sarmientoj24 avatar Dec 22 '21 04:12 sarmientoj24

It should be the same. I think it could be directly convertible if the keys match.
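A first sanity check before converting anything is to dump the parameter keys of both models and eyeball how they line up. A minimal sketch (the key names below are illustrative stand-ins for real `state_dict()` output, not the full key lists):

```python
# Sketch: compare the parameter keys of two checkpoints side by side.
# The lists below stand in for `list(model.state_dict().keys())` from
# each implementation; in practice you would load real checkpoints.

rosinality_keys = [
    "style.1.weight",             # mapping MLP (Generator.style)
    "conv1.conv.weight",          # first synthesis conv
    "conv1.conv.modulation.weight",
]
nvlabs_keys = [
    "mapping.fc0.weight",         # MappingNetwork
    "synthesis.b4.conv1.weight",  # SynthesisNetwork
    "synthesis.b4.conv1.affine.weight",
]

# Walk both lists in order and print the tentative pairing.
for a, b in zip(rosinality_keys, nvlabs_keys):
    print(f"{a:35s} <-> {b}")
```

If the two dumps have the same length and the pairings look sensible, an order-based conversion is plausible; any length mismatch points at non-learned buffers that exist on only one side.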

rosinality avatar Dec 22 '21 10:12 rosinality

But the NVLabs SG2 has a SynthesisNetwork and a MappingNetwork, which can be seen here:

        self.synthesis = SynthesisNetwork(w_dim=w_dim, img_resolution=img_resolution, img_channels=img_channels, **synthesis_kwargs)
        self.num_ws = self.synthesis.num_ws
        self.mapping = MappingNetwork(z_dim=z_dim, c_dim=c_dim, w_dim=w_dim, num_ws=self.num_ws, **mapping_kwargs)

Could you indicate which parts of your SG2 architecture are the SynthesisNetwork and the MappingNetwork?

sarmientoj24 avatar Dec 22 '21 15:12 sarmientoj24

Can you advise me how do I convert the Generator part from your code to the NVLabs one?

sarmientoj24 avatar Dec 23 '21 17:12 sarmientoj24

MappingNetwork corresponds to Generator.style and SynthesisNetwork corresponds to the rest of the generator. You can match keys in order, and you can refer to convert_weight.py, as the official PyTorch implementation is similar to the TensorFlow implementation.
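An order-based match in that spirit can be sketched as follows (a sketch only, assuming the two state_dicts enumerate learned parameters in the same order; the keys and shapes here are illustrative, and with real tensors you would still check shapes before copying):

```python
# Sketch: convert one checkpoint to another by pairing keys in order.
# Plain dicts of shape tuples stand in for tensors to keep the idea
# clear; with real checkpoints the values would be torch tensors.

src = {  # NVLabs-style keys (illustrative)
    "mapping.fc0.weight": (512, 512),
    "mapping.fc0.bias": (512,),
    "synthesis.b4.conv1.weight": (512, 512, 3, 3),
}
dst = {  # rosinality-style keys (illustrative)
    "style.1.weight": (512, 512),
    "style.1.bias": (512,),
    "conv1.conv.weight": (1, 512, 512, 3, 3),  # note the extra leading dim
}

converted = {}
for (src_key, src_val), dst_key in zip(src.items(), dst.keys()):
    # With real tensors, this is where a reshape/transpose may be
    # needed (e.g. the modulated conv weight's extra leading dim).
    converted[dst_key] = src_val

for k, v in converted.items():
    print(k, v)
```

Dicts preserve insertion order in Python 3.7+, which is what makes the order-based pairing well defined here.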

rosinality avatar Dec 24 '21 00:12 rosinality

I have already matched the MappingNetwork, although it took me some time.

On the SynthesisNetwork side, I can see this affine FC layer:

self.affine = FullyConnectedLayer(w_dim, in_channels, bias_init=1)

Do you know what this is?

To add, there is this difference. One SynthesisLayer in your implementation has the keys

'convs.1.conv.weight',
'convs.1.conv.modulation.weight',
'convs.1.conv.modulation.bias',
'convs.1.noise.weight',
'convs.1.activate.bias',
'noises.noise_1',

which is equivalent to theirs

'synthesis.b16.conv1.weight',
'synthesis.b16.conv1.noise_strength',
'synthesis.b16.conv1.bias',
'synthesis.b16.conv1.resample_filter',
'synthesis.b16.conv1.noise_const',
'synthesis.b16.conv1.affine.weight',
'synthesis.b16.conv1.affine.bias',

Any idea on the counterparts?

sarmientoj24 avatar Dec 29 '21 15:12 sarmientoj24

affine corresponds to modulation. The noise weight and noise correspond to noise_strength and noise_const.
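Putting that together with the key lists above, the per-layer correspondence for the example SynthesisLayer would roughly be the following (a sketch based on this thread's lists; `b16`/`convs.1` are just the example layer, and `resample_filter` has no learned counterpart):

```python
# Sketch: explicit rename table for one SynthesisLayer, built from the
# correspondence described above (affine <-> modulation, noise_strength
# <-> noise.weight, noise_const <-> noises.noise_*).

RENAME = {
    "synthesis.b16.conv1.weight":         "convs.1.conv.weight",
    "synthesis.b16.conv1.affine.weight":  "convs.1.conv.modulation.weight",
    "synthesis.b16.conv1.affine.bias":    "convs.1.conv.modulation.bias",
    "synthesis.b16.conv1.noise_strength": "convs.1.noise.weight",
    "synthesis.b16.conv1.bias":           "convs.1.activate.bias",
    "synthesis.b16.conv1.noise_const":    "noises.noise_1",
    # "synthesis.b16.conv1.resample_filter" is a fixed (non-learned)
    # blur kernel, not a trainable parameter, so it is omitted here.
}

def convert_key(nvlabs_key):
    """Map an NVLabs key to its rosinality counterpart, or None."""
    return RENAME.get(nvlabs_key)

print(convert_key("synthesis.b16.conv1.affine.weight"))
# -> convs.1.conv.modulation.weight
```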

rosinality avatar Dec 31 '21 04:12 rosinality

> MappingNetwork corresponds to Generator.style and SynthesisNetwork corresponds to the rest of the generator. You can match keys in order, and you can refer to convert_weight.py, as the official PyTorch implementation is similar to the TensorFlow implementation.

What do G.synthesis.num_ws and G.synthesis.block_resolutions correspond to?

bayndrysf avatar Jan 22 '22 15:01 bayndrysf

@bayndrysf I think those are constants that are not required in this implementation.

rosinality avatar Jan 23 '22 01:01 rosinality

Hi @sarmientoj24! Did you manage to put together a script that can convert these checkpoints to this repo's architecture in the end? If so, would you be kind enough to share it please :)?

james-oldfield avatar Feb 19 '22 15:02 james-oldfield

@james-oldfield Unfortunately, I went with a different approach but you can possibly do that. I just restructured the code to make it a bit similar to NVLabs's version where there are two networks and you can produce the W, W+, S latent space.

sarmientoj24 avatar Feb 19 '22 17:02 sarmientoj24

@rosinality Can you kindly help me out with some issues regarding porting stylegan2-ada weights? There are some layers left over after converting the layers as per your reference code.

Your model:

convs.0.conv.blur.kernel torch.Size([4, 4])
convs.2.conv.blur.kernel torch.Size([4, 4])
convs.4.conv.blur.kernel torch.Size([4, 4])
convs.6.conv.blur.kernel torch.Size([4, 4])
convs.8.conv.blur.kernel torch.Size([4, 4])
convs.10.conv.blur.kernel torch.Size([4, 4])
to_rgbs.0.upsample.kernel torch.Size([4, 4])
to_rgbs.1.upsample.kernel torch.Size([4, 4])
to_rgbs.2.upsample.kernel torch.Size([4, 4])
to_rgbs.3.upsample.kernel torch.Size([4, 4])
to_rgbs.4.upsample.kernel torch.Size([4, 4])
to_rgbs.5.upsample.kernel torch.Size([4, 4])


StyleGAN2-ADA:

synthesis.b4.resample_filter torch.Size([4, 4])
synthesis.b4.conv1.resample_filter torch.Size([4, 4])
synthesis.b8.resample_filter torch.Size([4, 4])
synthesis.b8.conv0.resample_filter torch.Size([4, 4])
synthesis.b8.conv1.resample_filter torch.Size([4, 4])
synthesis.b16.resample_filter torch.Size([4, 4])
synthesis.b16.conv0.resample_filter torch.Size([4, 4])
synthesis.b16.conv1.resample_filter torch.Size([4, 4])
synthesis.b32.resample_filter torch.Size([4, 4])
synthesis.b32.conv0.resample_filter torch.Size([4, 4])
synthesis.b32.conv1.resample_filter torch.Size([4, 4])
synthesis.b64.resample_filter torch.Size([4, 4])
synthesis.b64.conv0.resample_filter torch.Size([4, 4])
synthesis.b64.conv1.resample_filter torch.Size([4, 4])
synthesis.b128.resample_filter torch.Size([4, 4])
synthesis.b128.conv0.resample_filter torch.Size([4, 4])
synthesis.b128.conv1.resample_filter torch.Size([4, 4])
synthesis.b256.resample_filter torch.Size([4, 4])
synthesis.b256.conv0.resample_filter torch.Size([4, 4])
synthesis.b256.conv1.resample_filter torch.Size([4, 4])
mapping.w_avg torch.Size([512])

I would be grateful if you could kindly help me in figuring out the right mapping :)
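For what it's worth, the leftover entries on both sides are non-learned buffers rather than trainable weights, so they don't need to be copied at all: the blur/upsample kernels and the `resample_filter` entries are the same fixed 4x4 FIR filter built from the taps [1, 3, 3, 1], and `mapping.w_avg` (the running average of W used for truncation) can be recomputed at inference time instead (e.g. via rosinality's `Generator.mean_latent`). A sketch of how that constant filter is constructed:

```python
# Sketch: the 4x4 blur/resample filter that appears as a leftover
# buffer on both sides. It is the outer product of the fixed FIR taps
# [1, 3, 3, 1], normalized to sum to 1, as both implementations do.

taps = [1, 3, 3, 1]
fir = [[a * b for b in taps] for a in taps]  # 4x4 separable kernel

total = sum(sum(row) for row in fir)         # = 64 before normalizing
fir = [[v / total for v in row] for row in fir]

assert len(fir) == 4 and len(fir[0]) == 4
assert abs(sum(sum(row) for row in fir) - 1.0) < 1e-9
print(fir[0])  # first row of the normalized kernel
```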

usmancheema89 avatar Oct 13 '22 09:10 usmancheema89

Here's my code, which might come in handy for someone in the future: convert_weights.txt

usmancheema89 avatar Oct 13 '22 09:10 usmancheema89

@rosinality or others: has anyone been able to figure out how to convert stylegan2-ada-pytorch weights to @rosinality's implementation weights?

garg-aayush avatar Jan 11 '23 12:01 garg-aayush

https://github.com/yuval-alaluf/stylegan3-editing has some resources for converting StyleGAN-3 checkpoints to a rosinality-style generator. StyleGAN-3 has ADA support, so that might be useful for you @garg-aayush

usmancheema89 avatar Jan 12 '23 00:01 usmancheema89

@usmancheema89 Actually, we found the following script https://github.com/rosinality/stylegan2-pytorch/issues/206#issuecomment-812273460 that allows you to convert official stylegan2-ada-pytorch weights to the rosinality implementation.

I checked the script last night and it works great!

Thanks

garg-aayush avatar Jan 12 '23 05:01 garg-aayush

> @usmancheema89 Actually, we found the following script #206 (comment) that allows you to convert official stylegan2-ada-pytorch weights to the rosinality implementation.
>
> I checked the script last night and it works great!
>
> Thanks

Hello, I found out that the conversion only covers the g_ema parameters. What about the other parameters (the g and d parameters)? [screenshots attached]

zhanghongyong123456 avatar Jan 13 '23 02:01 zhanghongyong123456