pi-GAN
Training strategy on CelebA
Hi, thanks for your great work! I am wondering about the training strategy for CelebA shown in curriculum.py: it seems inconsistent with what is stated in the paper. The paper says a progressive training strategy was used, but in the code CelebA is only trained at 64 x 64 resolution. Could you update curriculum.py so the results are easier to reproduce?
I have the same question. Could you provide the curriculum file for CelebA that reproduces the numbers reported in the paper? Also, what's the GPU memory requirement for that training? Thanks!
Hi @hytseng0509, I think they state the GPU memory requirement clearly in the appendix of the paper: two RTX 6000 GPUs or a single RTX 8000 GPU.
Hi! I tried the settings below for CelebA (on a 2080Ti, and it didn't go OOM):

    0:           {'batch_size': 40, 'num_steps': 12, 'img_size': 32, 'batch_split': 2, 'gen_lr': 6e-5, 'disc_lr': 2e-4},
    int(10000):  {'batch_size': 20, 'num_steps': 12, 'img_size': 64, 'batch_split': 4, 'gen_lr': 3e-5, 'disc_lr': 1e-4},
    int(50000):  {'batch_size': 8, 'num_steps': 12, 'img_size': 128, 'batch_split': 8, 'gen_lr': 1e-5, 'disc_lr': 5e-5},
    int(200000): {},
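For anyone unsure why `batch_split` grows with resolution: my understanding (an assumption, not code taken from the repo) is that it is used for gradient accumulation, i.e. each optimizer step still covers the full `batch_size`, but the batch is forwarded/backwarded in `batch_split` chunks so peak GPU memory stays lower at 128x128. A minimal sketch:

```python
import torch

# Sketch of gradient accumulation as I assume batch_split is used.
# Illustrative only; the function name accumulate_step is hypothetical.
def accumulate_step(model, loss_fn, optimizer, batch, batch_split):
    optimizer.zero_grad()
    for chunk in torch.chunk(batch, batch_split, dim=0):
        loss = loss_fn(model(chunk)) / batch_split  # average across chunks
        loss.backward()                             # gradients accumulate
    optimizer.step()
```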
I don't think we have to go through all 200,000 training steps, which would be too time-consuming. I observed that after training for only about 1 hour, the model could already generate reasonably good images. PS: this setting trains 10,000 steps at 32x32, 40,000 steps at 64x64, and 150,000 steps at 128x128. With it, the first two stages (32x32 and 64x64) take about 9 hours on a 2080Ti.
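To make the schedule above explicit, here is a small hypothetical helper (not taken from the repo) showing how a step-keyed curriculum like this is typically resolved: the active stage is the entry whose step threshold is the largest one not exceeding the current training step.

```python
def current_stage(curriculum, step):
    """Return the stage dict whose step threshold is the largest one <= step."""
    active = {}
    for threshold in sorted(k for k in curriculum if isinstance(k, int)):
        if step >= threshold:
            active = curriculum[threshold]
    return active

curriculum = {
    0:      {'img_size': 32,  'batch_size': 40},
    10000:  {'img_size': 64,  'batch_size': 20},
    50000:  {'img_size': 128, 'batch_size': 8},
    200000: {},  # training ends here
}
print(current_stage(curriculum, 42000))  # -> the 64x64 stage
```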