stylegan2-pytorch
Why flip weight and add 1 to bias when loading tensorflow pretrained model
Thanks for the great work. I noticed that when loading the TensorFlow pretrained model you flip "Conv0_up.conv.weight" and add 1 to "conv.modulation.bias". Is this because of a difference between this implementation and the official one, and could it cause a performance drop?
The approach of the official implementation is slightly different from this repository. I chose to initialize the bias to 1, whereas the official implementation initializes it to 0 and adds 1 when applying the bias. The convolution weights are likewise stored flipped relative to the official checkpoint, to match the kernel orientation used here. But as these parameters are all learnable, the two implementations should be equivalent.
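Both conversions can be checked numerically. The sketch below (illustrative only, not code from either repository) shows that folding the +1 into the stored bias gives the same output as adding 1 at apply time, and that a stride-1 transposed convolution equals an ordinary convolution with a spatially flipped kernel and full padding, which is why the kernel flip is needed when mapping weights between the two conventions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# 1) Bias convention: "init to 0, add 1 at apply time" (official)
#    vs. "fold the +1 into the stored bias" (this repo).
x = torch.randn(4, 8)
w = torch.randn(16, 8)
b = torch.zeros(16)                      # official-style bias, initialized to 0
official = F.linear(x, w, b) + 1         # +1 applied at runtime
ported = F.linear(x, w, b + 1)           # +1 folded into the checkpoint bias
assert torch.allclose(official, ported)

# 2) Kernel orientation: a stride-1 transposed convolution is the same as a
#    plain convolution with the kernel flipped along both spatial axes and
#    "full" padding (k - 1 on each side).
img = torch.randn(1, 1, 5, 5)
ker = torch.randn(1, 1, 3, 3)
a = F.conv_transpose2d(img, ker)
b2 = F.conv2d(F.pad(img, (2, 2, 2, 2)), ker.flip([2, 3]))
assert torch.allclose(a, b2, atol=1e-5)
print("both equivalences hold")
```

Since gradients flow through the stored parameters identically under either convention, training from the converted checkpoint should behave the same.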