
Pretrained model?

voa18105 opened this issue · 18 comments

Can you / will you upload any pretrained model to compare results, please?

voa18105 avatar Feb 13 '19 11:02 voa18105

https://drive.google.com/file/d/1zmVRIXk8HHddLmibKcEYpT0gixUfPlSp/view?usp=sharing

You can use this. It's a 600k-iteration checkpoint of the generator.

rosinality avatar Feb 13 '19 11:02 rosinality
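For anyone else picking this file up, here is a minimal sketch of how the generator checkpoint might be loaded and sampled. It assumes the repo's StyledGenerator with a 512-dim style code; the filename and the step/alpha values are placeholders, not something confirmed in this thread, and the step has to match the resolution the checkpoint was trained at.

```python
# Minimal sketch (assumptions: StyledGenerator from model.py, 512-dim code,
# hypothetical filename 'checkpoint-600k.model').
import torch
from torchvision import utils
from model import StyledGenerator

device = 'cuda' if torch.cuda.is_available() else 'cpu'
generator = StyledGenerator(512).to(device)

ckpt = torch.load('checkpoint-600k.model', map_location=device)
# The download may be a bare state_dict or a dict wrapping several modules;
# prefer the running-average generator weights if they are present.
state = ckpt['g_running'] if 'g_running' in ckpt else ckpt
generator.load_state_dict(state)
generator.eval()

with torch.no_grad():
    z = torch.randn(8, 512, device=device)
    # step selects the progressive-growing stage (resolution = 4 * 2 ** step,
    # so step=5 -> 128px, step=6 -> 256px); alpha=1 means fully faded in.
    images = generator(z, step=6, alpha=1)

utils.save_image(images, 'samples.png', nrow=4, normalize=True)
```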

Awesome! Thank you, man! I will tell you in a couple of days (hopefully) if I've managed to reproduce your results (reach more or less the same quality).

voa18105 avatar Feb 13 '19 12:02 voa18105

> Awesome! Thank you, man! I will tell you in a couple of days (hopefully) if I've managed to reproduce your results (reach more or less the same quality).

Did you use the pretrained model?

Johnson-yue avatar Feb 15 '19 07:02 Johnson-yue

@Johnson-yue Yes, it works well.

voa18105 avatar Feb 15 '19 08:02 voa18105

@rosinality I've reached more or less the same result. Did you try CelebA-HQ? What do you think: if I increase the number of layers, will it generate HQ images?

voa18105 avatar Feb 18 '19 14:02 voa18105

@voa18105 I didn't try it, but I think it will work - the network architecture is almost the same. But you will also need to modify the train script, as I hard-coded things a lot. :/

rosinality avatar Feb 18 '19 23:02 rosinality
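One thing to line up if you go beyond the default resolution is the mapping between the progressive-growing step and the image size. The small note below assumes the usual convention that resolution doubles per step from a 4px base, so the exact numbers should be checked against train.py.

```python
# Assumption: resolution = 4 * 2 ** step, as in the usual progressive schedule.
# CelebA-HQ / FFHQ at full 1024px would need step 8, i.e. three more
# conv/to_rgb blocks than a 128px (step 5) model.
import math

def step_for_resolution(resolution):
    # e.g. 128 -> 5, 256 -> 6, 1024 -> 8
    return int(math.log2(resolution)) - 2

for res in (8, 128, 256, 512, 1024):
    print(res, '->', step_for_resolution(res))
```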

@rosinality thanks, I think I'm going to try it in the next few days.

voa18105 avatar Feb 19 '19 08:02 voa18105

Hi, would it be possible to make the discriminator weights available too? I would like to be able to fine-tune the model. Thanks so much :)

anuragranj avatar May 10 '19 09:05 anuragranj

@anuragranj https://drive.google.com/file/d/1SFn7GygaLYhOobNQH_eqICcQETDcBC0X/view I think you can use this. Trained on FFHQ, 140k iterations, 256px.

rosinality avatar May 10 '19 12:05 rosinality

Thanks a lot @rosinality ! :)

anuragranj avatar May 10 '19 12:05 anuragranj

> @anuragranj https://drive.google.com/file/d/1SFn7GygaLYhOobNQH_eqICcQETDcBC0X/view I think you can use this. Trained on FFHQ, 140k iterations, 256px.

I compared these weights with the weights you gave in issue #13. Are they the same? As far as I understand they differ both in quality and somehow in architecture (maybe I'm wrong about the architecture, but for some reason I get a dimension error in to_rgb in my wrapper - maybe it's my problem).

PgLoLo avatar May 17 '19 11:05 PgLoLo

In #13 you stated that the checkpoint was made at the same 140k iterations. But comparing images generated from the same latent vector, they are similar but differ in quality. Image from #13: [image]

Image from the current issue: [image]

Maybe you accidentally mixed up the training iterations?

PgLoLo avatar May 17 '19 11:05 PgLoLo

@PgLoLo https://drive.google.com/file/d/1SFn7GygaLYhOobNQH_eqICcQETDcBC0X/view contains the generator/discriminator training checkpoints, and https://drive.google.com/file/d/1TVdUGOcMRVTVaxLhmh2qVmPgWIlwE0if/view is the running average of the generator weights. So they are different.

rosinality avatar May 17 '19 11:05 rosinality
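To make the difference concrete, here is a hedged sketch of how the two files might be used. It assumes the training checkpoint stores 'generator', 'discriminator', and 'g_running' entries (the key names are inferred from this explanation, not verified against the files) and that Discriminator lives in model.py next to StyledGenerator.

```python
# Hedged sketch of the two checkpoint flavours described above.
# Key names ('generator', 'discriminator', 'g_running') are assumptions.
import torch
from model import StyledGenerator, Discriminator

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# 1) Full training checkpoint: has both networks, so it is the one to use
#    for fine-tuning (as requested above).
train_ckpt = torch.load('train-checkpoint.model', map_location=device)
generator = StyledGenerator(512).to(device)
discriminator = Discriminator().to(device)
generator.load_state_dict(train_ckpt['generator'])
discriminator.load_state_dict(train_ckpt['discriminator'])

# 2) Running-average generator only: usually the better choice for sampling,
#    since averaged weights tend to give cleaner images, but it cannot be
#    used to resume adversarial training on its own.
g_running = StyledGenerator(512).to(device)
avg_ckpt = torch.load('g-running.model', map_location=device)
g_running.load_state_dict(avg_ckpt['g_running'] if 'g_running' in avg_ckpt else avg_ckpt)
g_running.eval()
```

That split would also explain why the same latent vector gives similar but differently-detailed images from the two files.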

@rosinality Oh, I see, thank you very much!

PgLoLo avatar May 17 '19 12:05 PgLoLo

Can you please upload some images from while training was in progress, i.e. lower-resolution ones, e.g. some 8x8 images, then some 16x16, and so on? Mine look something like this at 8x8: [image] and like this at 16x16: [image]. I am trying to distill one with conditioning on shape, appearance, and so on, so the very similar-looking samples are by design :-) and not mode collapse. However, at higher resolutions I get weird-looking, globally disfigured samples, e.g.: [image]. Did this happen to anyone else?

Any idea what might be going wrong?

ParthaEth avatar Nov 04 '19 16:11 ParthaEth

@ParthaEth Actually some weird samples can happen, as the number of training steps in each phase is not that large. I think you can train more to get better results.

rosinality avatar Nov 05 '19 05:11 rosinality

You mean train more at every step before switching to the next resolution?

ParthaEth avatar Nov 05 '19 10:11 ParthaEth

@ParthaEth Yes, you can train more at certain resolutions, as the training iterations during each phase may not be enough.

rosinality avatar Nov 07 '19 14:11 rosinality
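In case it helps anyone hitting the same artifacts: "training more per phase" amounts to a longer sample budget per resolution stage. The sketch below uses made-up names (phase_schedule, samples_per_phase) and is not the repo's actual CLI or schedule; check train.py for the real knobs.

```python
# Hypothetical sketch: lengthen each resolution phase by raising
# samples_per_phase, fading alpha in over the first half of the phase.
def phase_schedule(used_samples, samples_per_phase=1_200_000, init_step=1, max_step=6):
    """Map a running sample count to (step, alpha) for progressive growing."""
    step = min(init_step + used_samples // samples_per_phase, max_step)
    into_phase = used_samples - (step - init_step) * samples_per_phase
    # alpha ramps 0 -> 1 while the new resolution fades in, then stays at 1
    alpha = min(1.0, into_phase / (samples_per_phase / 2)) if step > init_step else 1.0
    return step, alpha

print(phase_schedule(0))           # (1, 1.0)  - initial resolution, fully faded in
print(phase_schedule(1_300_000))   # (2, ~0.17) - next resolution, still fading in
print(phase_schedule(2_500_000))   # (3, ~0.17)
```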