style-based-gan-pytorch
Image size and the differences from the official implementation
Thanks for this work! But I still have two questions:
- The number of StyledConvBlocks is 6, so the output image size is no more than 128, am I right? Can this be extended to 512 or even 1024 images?
- Are there any differences between this implementation and the official code (or the paper)?
- You can add more layers and extend the model to higher resolutions (a rough sketch follows below).
- I think I matched almost all the details in the paper. I haven't checked every detail of the official implementation, but both look very similar. Some details do differ slightly: I used native bilinear interpolation, whereas the official implementation uses a binomial filter. Also the learning rates: this implementation uses 1e-3 (same as the Progressive GAN paper), while the official implementation uses 1.5e-3.
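(For reference, a hypothetical sketch of what "adding more layers" might look like, following the repo's pattern of paired StyledConvBlock / to_rgb lists; the exact constructor arguments below are assumptions, not the repo's actual API:)

```python
# Hypothetical sketch: growing the generator from 128px to 1024px.
# StyledConvBlock / EqualConv2d signatures are assumed from model.py.
from model import StyledConvBlock, EqualConv2d

extra_blocks = [
    StyledConvBlock(128, 64, 3, 1, upsample=True),   # 128 -> 256
    StyledConvBlock(64, 32, 3, 1, upsample=True),    # 256 -> 512
    StyledConvBlock(32, 16, 3, 1, upsample=True),    # 512 -> 1024
]
extra_to_rgb = [
    EqualConv2d(64, 3, 1),   # to_rgb for the 256px block
    EqualConv2d(32, 3, 1),   # to_rgb for the 512px block
    EqualConv2d(16, 3, 1),   # to_rgb for the 1024px block
]
# These would be appended to the generator's progression / to_rgb
# ModuleLists, with the training loop's maximum step raised accordingly.
```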
Thanks! I will try higher resolutions.
I think I've found a difference with the official implementation.
In the StyledConvBlock, the noise is injected after the AdaIN operation, whereas the official implementation injects it just after the conv, before the AdaIN operation. Could this be the reason for the difference in results?
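(To make the ordering concrete, a minimal runnable sketch; the `adain` helper and the shapes here are illustrative assumptions, not the repo's actual modules:)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def adain(x, style_scale, style_bias):
    # Adaptive instance norm: normalize each channel, then rescale with the style.
    return style_scale * F.instance_norm(x) + style_bias

conv = nn.Conv2d(512, 512, 3, padding=1)
x = torch.randn(1, 512, 8, 8)
noise = torch.randn(1, 1, 8, 8)
noise_weight = torch.zeros(1, 512, 1, 1)  # learned per-channel weight (init to zero)
scale, bias = torch.ones(1, 512, 1, 1), torch.zeros(1, 512, 1, 1)

# Official ordering: conv -> noise -> AdaIN (AdaIN rescales the noise too).
out = conv(x) + noise_weight * noise
out_official = adain(out, scale, bias)

# This repo before the fix: conv -> AdaIN -> noise
# (the noise bypasses AdaIN's per-channel scaling).
out_old = adain(conv(x), scale, bias) + noise_weight * noise
```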

I'm trying to get the parameters from the official pretrained model (in TensorFlow) and put them into your network to see if I get the same results. I'll edit this point in my forked repository and get back here if I notice any more differences.
It's my mistake. Thanks! Changed in 24896bb
I think I found another difference too. In https://github.com/rosinality/style-based-gan-pytorch/blob/master/model.py#L266, you only apply to_rgb() when i == step, while the official implementation applies torgb in all blocks.
The same problem exists in the Discriminator.
Hmm, but wouldn't lerp_clip make the model ignore the previous torgbs?
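(For context, a sketch of the lerp_clip helper from the official TF code, rewritten in Python, and why only the newest torgb survives the fade-in:)

```python
def lerp_clip(a, b, t):
    # Official TF helper: linear blend of a and b with t clipped to [0, 1].
    t = max(0.0, min(1.0, t))
    return a + (b - a) * t

# During the fade-in at resolution step N, the output image is roughly
#   lerp_clip(upsample(torgb_prev(feat_prev)), torgb_cur(feat_cur), alpha)
# so once alpha reaches 1 the previous torgb drops out entirely, and
# torgbs from even earlier steps never reach the output at all.
```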
Something I noticed here: https://github.com/rosinality/style-based-gan-pytorch/blob/master/model.py#L270
You are sending the upsampled activations through the previous step's toRGB, because this line executes first: https://github.com/rosinality/style-based-gan-pytorch/blob/master/model.py#L259 and then you interpolate.
Whereas in the official implementation, the activations of each step are run through the corresponding torgb layer, and the resulting output image is then upsampled to do the interpolation: https://github.com/NVlabs/stylegan/blob/master/training/networks_stylegan.py#L542
Was this intentional?
Both will be almost the same. But applying torgb before upsampling is more efficient, as it reduces the number of channels first.
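(A minimal sketch of the two orderings; the shapes and the 1x1 torgb conv are assumptions for illustration:)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torgb_prev = nn.Conv2d(512, 3, 1)   # previous step's 1x1 to-RGB conv
feat = torch.randn(1, 512, 64, 64)  # previous step's activations

# This repo: upsample the 512-channel features, then apply torgb.
skip_repo = torgb_prev(
    F.interpolate(feat, scale_factor=2, mode='bilinear', align_corners=False))

# Official: torgb first (512 -> 3 channels), then upsample the RGB image.
# Cheaper, since the interpolation now runs on 3 channels instead of 512.
skip_official = F.interpolate(
    torgb_prev(feat), scale_factor=2, mode='bilinear', align_corners=False)
```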
@rosinality @aparafita guys, am I correct that these before/after changes do not require retraining? It feels like this impacts only inference.
Unfortunately this will require retraining, as the noise term interacts with the adaptive instance norm.
@voa18105 The function will be affected for sure. The AdaIN changes the scale of each channel, so if the noise comes before it, the scale of the noise is also affected. In that sense, the official implementation makes sense and the noise should be injected before the AdaIN, but it's hard to say how important it'd be to the overall result.
oh no, 3 days of retraining... again...
In the official implementation, they use blur after the upscale conv.
But this repo does not use the upscale conv when upscaling the image.
https://github.com/rosinality/style-based-gan-pytorch/blob/24896bb6c080e9c0fb233c7b3647422d65d73dc3/model.py#L258-L261

```python
if i > 0 and step > 0:
    upsample = F.interpolate(out, scale_factor=2, mode='bilinear', align_corners=False)
    # upsample = self.blur(upsample)
    out = conv(upsample, style_step, noise[i])
```

Did I miss something here?
Bilinear upsampling is taking the place of the upscale conv + blur; since in PyTorch the upscaling uses interpolate anyway, the bilinear filtering on the way up is essentially the same as the blur.
It is slightly different, but I changed it to match exactly and it didn't make a noticeable difference in FID score. The StyleGAN paper also mentions they tried bilinear upsampling and it made a small improvement, although I didn't see it in the code.
@mileslefttogo What I don't understand here is why the upscale conv layer can be replaced as well, since one is trainable while the other is not.
The official implementation uses upscale -> conv -> blur; my implementation uses upscale (bilinear) -> conv. So yes, the order is different. (Upscale + blur works similarly to bilinear interpolation except at the edges, as @mileslefttogo said. I used bilinear interpolation due to speed concerns.) I don't know whether it will make much difference, but maybe you can try changing the ordering.
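(A runnable sketch of the two pipelines side by side; the channel counts and the 3x3 binomial blur kernel are illustrative assumptions:)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

conv = nn.Conv2d(512, 256, 3, padding=1)
x = torch.randn(1, 512, 32, 32)

# Official: upscale (nearest) -> conv -> blur (depthwise binomial filter).
k = torch.tensor([1., 2., 1.])
blur = k[:, None] * k[None, :]
blur = (blur / blur.sum()).expand(256, 1, 3, 3).contiguous()  # one 3x3 kernel per channel
out = F.interpolate(x, scale_factor=2, mode='nearest')
out = conv(out)
out_official = F.conv2d(out, blur, padding=1, groups=256)

# This repo: bilinear upscale -> conv
# (the bilinear filter plays roughly the same smoothing role as the blur).
out_repo = conv(F.interpolate(x, scale_factor=2, mode='bilinear',
                              align_corners=False))
```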
Now I got it. Thanks
@rosinality @aparafita @voa18105 Hi guys, have any of you trained a new model with the bug-fix version (commit 24896bb)? I would appreciate it if any of you could provide a more advanced pre-trained model on FFHQ. A further question: have any of you trained a model for generating high-resolution images?
@Cold-Winter As I understand it, this implementation does not support HQ. Also, I don't have FFHQ.
@Cold-Winter I don't know whether I can get enough computing resources to train a high-resolution model in a reasonable time... But I will revise the code to allow training the model at higher resolutions.