style-based-gan-pytorch
Pretrained model?
Can you / will you upload any pretrained model to compare results, please?
https://drive.google.com/file/d/1zmVRIXk8HHddLmibKcEYpT0gixUfPlSp/view?usp=sharing
You can use this. It's a 600k-iteration checkpoint of the generator.
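For anyone else picking this up, here is a minimal sketch of how such a checkpoint could be loaded and sampled with this repo's generator. The filename, the `g_running` key, and the `step`/`alpha` call signature are assumptions (taken from how the repo's generate script appears to work), so adjust them to the actual checkpoint:

```python
# Minimal sketch: sample from a pretrained generator checkpoint (assumptions noted in comments).
import torch
from model import StyledGenerator  # generator class from this repo

device = 'cuda' if torch.cuda.is_available() else 'cpu'

generator = StyledGenerator(512).to(device)  # 512-dim latent code, the repo's default
generator.eval()

ckpt = torch.load('checkpoint.model', map_location=device)  # hypothetical filename
# Depending on how the file was saved, it is either a plain state_dict or a dict of
# state_dicts; 'g_running' (running-average generator) is an assumed key name.
state_dict = ckpt['g_running'] if 'g_running' in ckpt else ckpt
generator.load_state_dict(state_dict)

with torch.no_grad():
    z = torch.randn(8, 512, device=device)
    # step selects the output resolution (4 * 2**step) and alpha=1 means the last block is
    # fully faded in; this call signature is assumed, check generate.py in the repo.
    images = generator(z, step=6, alpha=1)  # step=6 -> 256px; use the step your checkpoint was trained to
print(images.shape)
```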
Awesome! Thank you, man! I will tell you in a couple of days (hopefully) if I've managed to reproduce your results (reach more or less the same quality).
Did you use the pretrained model?
@Johnson-yue Yes, it works well.
@rosinality I've reached more or less the same result. Did you try CelebA-HQ? Do you think that if I increase the number of layers, it will generate HQ images?
@voa18105 I didn't try it, but I think it will work - the network architecture is almost the same. But you will also need to modify the training script, as I hard-coded a lot of things. :/
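For reference, a tiny sketch of the resolution/step relation in this kind of progressive architecture: the generator starts at 4×4 and each additional step/block doubles the resolution, so going from 256px to 1024px means two more progression blocks in both networks (the exact step numbering here is an assumption, check model.py and train.py):

```python
# Output resolution at each progressive-growing step, starting from 4x4.
def resolution(step: int) -> int:
    return 4 * 2 ** step

for step in range(9):
    print(step, resolution(step))
# step 6 -> 256, step 7 -> 512, step 8 -> 1024
```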
@rosinality Thanks, I think I'm going to try it in the next few days.
Hi, would it be possible to make the discriminator weights available too? I would like to be able to fine-tune the model. Thanks so much :)
@anuragranj https://drive.google.com/file/d/1SFn7GygaLYhOobNQH_eqICcQETDcBC0X/view I think you can use this. Trained using FFHQ, 140k iter, 256px.
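A rough sketch of how that full training checkpoint could be loaded to fine-tune both networks. The key names ('generator', 'discriminator') and the no-argument Discriminator constructor are assumptions about how the repo's train.py saves things, so print the keys first:

```python
# Sketch: resume/fine-tune from a full training checkpoint (key names are assumptions).
import torch
from model import StyledGenerator, Discriminator

device = 'cuda' if torch.cuda.is_available() else 'cpu'
ckpt = torch.load('train_checkpoint.model', map_location=device)  # hypothetical filename
print(ckpt.keys())  # confirm the actual key names before loading

generator = StyledGenerator(512).to(device)
discriminator = Discriminator().to(device)  # constructor arguments may differ, see model.py

generator.load_state_dict(ckpt['generator'])
discriminator.load_state_dict(ckpt['discriminator'])
# From here, continue with the repo's training loop; optimizer states may also be in the checkpoint.
```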
Thanks a lot @rosinality ! :)
I compared these weights with the weights you gave in issue #13. Are they the same? As far as I understand, they differ both in quality and somehow in architecture (maybe I'm wrong about the architecture, but for some reason I get a dimension error in to_rgb in my wrapper, though maybe that's my problem).
In #13 you stated that the checkpoint is from the same 140k iterations. But comparing images generated from the same latent vector, they are similar but differ in quality:
from #13: [image]

from the current issue: [image]
Maybe you accidentally mixed up the training iterations?
@PgLoLo https://drive.google.com/file/d/1SFn7GygaLYhOobNQH_eqICcQETDcBC0X/view contains the generator/discriminator checkpoints, and https://drive.google.com/file/d/1TVdUGOcMRVTVaxLhmh2qVmPgWIlwE0if/view contains the running average of the generator weights. So they are different.
@rosinality Oh, I see, thank you very much!
Can you please upload some images from during training? I mean lower-resolution images, e.g. some 8×8 ones, then some 16×16, and so on. Mine look something like this at 8×8: [image] and then like this at 16×16: [image]. I am trying to distill one with some conditioning on the shape, appearance, and so on, so the very similar-looking samples are by design :-) and there is no mode collapse. However, higher up in resolution I get weird-looking, globally disfigured ones. Did this happen to anyone else? E.g.: [image]

Any idea what might be going wrong?
@ParthaEth Actually, some weird samples can happen, as the number of training steps in each phase is not that large. I think you can train more to get better results.
You mean training more at each step before switching to the next resolution?
@ParthaEth Yes, you can train more at certain resolutions, as the number of training iterations during each phase may not be enough.
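In case it helps anyone reading later, a small back-of-the-envelope sketch of what "train more per phase" means in terms of the per-phase sample budget. The 600k default and the `--phase` flag name are assumptions about this repo's train.py, so check `python train.py --help` before relying on them:

```python
# Rough arithmetic for the per-phase training budget (all numbers are assumed examples).
phase_samples = 600_000       # samples shown to the networks per resolution phase (assumed default)
batch_size = 16               # assumed batch size at the current resolution
iters_per_phase = phase_samples // batch_size
print(iters_per_phase)        # 37_500 iterations per phase with these numbers

# In this style of progressive training, part of each phase fades the new block in
# (alpha going from 0 to 1) and the rest trains it at full strength, so increasing the
# per-phase budget (e.g. a larger --phase value) gives both stages more time.
```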