style-based-gan-pytorch
Question about progressive growing
8/16/32/64/128/256/512/1024
Will the results be good at each pyramid scale, or is only the result at the highest resolution guaranteed to be good?
In my experience, the results at all resolutions should be good as the model progresses upward. It is unlikely that the final resolution will be good if the earlier resolutions are bad.
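To make the schedule concrete, here is a minimal sketch of how progressive growing maps a training step to the currently active resolution. The function name, phase length, and start/max resolutions are illustrative assumptions, not this repo's actual configuration:

```python
def resolution_at(step, steps_per_phase=60_000, start_res=8, max_res=1024):
    """Return the active training resolution for a given global step.

    Each phase doubles the resolution (8 -> 16 -> 32 -> ...) and lasts
    `steps_per_phase` steps, capped at `max_res`. Illustrative only.
    """
    phase = step // steps_per_phase
    return min(start_res * (2 ** phase), max_res)

# Example: the first phase trains at 8p, the second at 16p, and
# sufficiently late steps stay at the 1024p cap.
print(resolution_at(0), resolution_at(60_000), resolution_at(10_000_000))
```

Under a schedule like this, every resolution in the 8/16/32/.../1024 pyramid gets its own training phase before the next one starts, which is why bad early-phase samples usually predict bad final samples.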
The samples will be good, but not very high quality, since training at the low-resolution phases is relatively short.
I was saving images every 10k steps. This is an image from before the 64p stage, i.e. a grid of 32p images at 110k steps, and the images clearly have artifacts.
110k - 32p images grid.
120k - 64p images grid.

Is this expected? I.e., in the progressive growing setting, will we only get 'near perfect quality' at the final resolution? Is it meaningless to expect 'near perfect quality' at each intermediate resolution, i.e. something like https://github.com/akanimax/BMSG-GAN/?
Yes, it will not be very high quality, especially at the lower resolutions, and you will need the truncation trick for better samples. You don't need high-quality samples at the lower resolutions to get good final quality, but if you want them, I think you can train for more steps at each resolution.
So far I have trained the model for 320k+ steps in ~130 hours.
320k - 128p images grid:

For now it is training at 128p (the maximum resolution in my setting) with alpha=1.0. As we can see, image quality is still not perfect. Does it need to train more? How good should the images look before applying the truncation trick?
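For reference, by alpha I mean the progressive-growing fade-in blend between the upsampled output of the previous resolution block and the output of the newly added block. A minimal sketch with scalar stand-ins (names are mine, not the repo's):

```python
def blend(low_res_upsampled, new_block_out, alpha):
    """Progressive-growing fade-in.

    alpha=0.0 -> use only the upsampled lower-resolution output;
    alpha=1.0 -> use only the new (higher-resolution) block's output.
    alpha ramps 0 -> 1 over the first part of each new phase, so
    alpha=1.0 means the fade-in for the current resolution is complete.
    """
    return (1.0 - alpha) * low_res_upsampled + alpha * new_block_out

# Midway through the fade-in, the output is an even mix of both paths.
print(blend(0.0, 1.0, 0.5))
```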
I think it is almost enough if you use the truncation trick.
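For reference, the truncation trick pulls sampled latent codes in W space toward the mean latent, trading diversity for sample quality. A dependency-free sketch (the function name and the psi value are illustrative; the repo's implementation operates on batched tensors):

```python
def truncate_w(w, w_mean, psi=0.7):
    """Truncation trick: interpolate a latent `w` toward the mean
    latent `w_mean` by factor `psi` (psi=1.0 -> no truncation,
    psi=0.0 -> collapse to the mean). Vectors given as plain lists."""
    return [m + psi * (x - m) for x, m in zip(w, w_mean)]

# A latent one unit from the mean is pulled 30% of the way back.
print(truncate_w([1.0, 1.0], [0.0, 0.0], psi=0.7))
```

In practice `w_mean` is estimated by averaging the mapping network's output over many random z samples, and smaller psi gives cleaner but less varied images.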