pytorch-pretrained-BigGAN
Fine-tune BigGAN?
Would it be possible to finetune this BigGAN implementation to a custom dataset, in order to generate new classes of images?
Hi, the discriminator weights are not public, so you will have to find a workaround to fine-tune the model as a GAN.
Do you think it would work to train a custom discriminator from scratch, while only fine-tuning the generator weights?
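One way this could be set up (a minimal sketch, not a recipe that is known to converge): load the pretrained generator from this repo, pair it with a freshly initialized discriminator, and give the generator a much smaller learning rate than the discriminator. The SimpleDiscriminator below is a hypothetical placeholder, and keep in mind that the generator's batch norm layers would also need running-statistics logic to be trained properly (see the comment quoted further down in the thread).

# A rough sketch, not the repo's official training code: pretrained generator
# plus a freshly initialized discriminator with separate learning rates.
import torch
import torch.nn as nn
from pytorch_pretrained_biggan import BigGAN

generator = BigGAN.from_pretrained('biggan-deep-128')   # pretrained G to fine-tune

class SimpleDiscriminator(nn.Module):
    """Hypothetical stand-in for a proper BigGAN-style discriminator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.net(x)

discriminator = SimpleDiscriminator()   # trained from scratch

# Small learning rate for the pretrained generator, larger one for the new
# discriminator, so G is only gently fine-tuned while D catches up.
g_optimizer = torch.optim.Adam(generator.parameters(), lr=1e-5, betas=(0.0, 0.999))
d_optimizer = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.0, 0.999))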
Hi, did you figure out if it works?
I am trying to do some domain shift with BigGAN. The weights of the discriminator and of the generator are available here: https://github.com/ajbrock/BigGAN-PyTorch.
I am still trying different setups and hyper-parameters, but I have not gotten the desired results yet.
Hi, can I fine-tune the BigGAN model with a small amount of data?
What worked best for me was to download the ImageNet dataset and replace one of the classes with your own data. You won't see any meaningful shift before 10k iterations, though.
Hi,
The comment in the code below states that this needs to be re-implemented if we want to train the generator.
class BigGANBatchNorm(nn.Module):
    """ This is a batch norm module that can handle conditional input and can be provided with
        pre-computed activation means and variances for various truncation parameters.

        We cannot just rely on torch.batch_norm since it cannot handle batched weights
        (pytorch 1.0.1). We compute batch_norm ourselves without updating running means and
        variances. If you want to train this model you should add running means and variance
        computation logic.
    """
If you just want to use the generator to flow gradients through, but don't want to change the weights of the generator, do you have to re-implement the batch norm, or can you leave it as is?
Thank you!
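A minimal sketch of that use case, assuming the intent is only to back-propagate through the generator to its inputs without updating its weights: the generator can stay in eval mode so the pre-computed batch norm statistics are used as-is, which should make re-implementing the batch norm unnecessary. The objective below is a placeholder, and only the noise vector carries gradients.

# A minimal sketch: freeze the generator's weights, keep it in eval mode
# (pre-computed batch norm statistics), and back-propagate to the noise input.
import torch
from pytorch_pretrained_biggan import (BigGAN, one_hot_from_names,
                                       truncated_noise_sample)

model = BigGAN.from_pretrained('biggan-deep-128')
model.eval()                                 # use pre-computed batch norm statistics
for p in model.parameters():
    p.requires_grad_(False)                  # generator weights stay fixed

truncation = 0.4
class_vector = torch.from_numpy(one_hot_from_names(['coffee'], batch_size=1))
noise_vector = torch.from_numpy(
    truncated_noise_sample(truncation=truncation, batch_size=1)
).requires_grad_(True)                       # optimize the latent, not the weights

output = model(noise_vector, class_vector, truncation)
loss = output.mean()                         # placeholder objective
loss.backward()                              # gradients reach noise_vector only
print(noise_vector.grad.shape)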
@VictorZuanazzi how can I replace one class with my own dataset?
@nnajeh what I did was pretty simple. I downloaded the Imagenet dataset, each class is stored in a separate folder. I selected a few folders, deleted the images inside the folders, and added my own data to them. The new data acts as regularization for the training and you can continue training the pre-trained model for a few tens of iterations without collapsing the generator.
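For reference, a small sketch of that folder swap; all paths and the WordNet ID below are just examples, not anything the pipeline requires.

# A sketch with hypothetical paths: empty one ImageNet class folder and copy
# your own images into it, so the existing data pipeline treats them as that class.
import shutil
from pathlib import Path

imagenet_train = Path('data/ImageNet/train')      # hypothetical ImageNet location
target_class = imagenet_train / 'n01440764'       # example class folder to repurpose
my_images = Path('data/my_dataset')               # hypothetical folder with your data

for old_file in target_class.iterdir():           # delete the original images
    old_file.unlink()
for new_file in my_images.iterdir():              # drop in your own images
    if new_file.is_file():
        shutil.copy(new_file, target_class / new_file.name)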
@VictorZuanazzi even if I don't have the same image sizes as ImageNet?
The images in ImageNet come in all sorts of different sizes; the data loader should include transformations for resizing them to the desired dimensions.
You can check the documentation on the transforms here: https://pytorch.org/vision/0.8/transforms.html. It would look like this:
transformations = transforms.Compose([transforms.Resize(64), transforms.ToTensor()])
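Expanding that snippet into a full loader (the path is hypothetical, and the crop/normalization choices are just one reasonable option):

# A sketch assuming an ImageFolder-style directory layout: resize every image
# to a common resolution regardless of its original size, then wrap the
# dataset in a standard DataLoader.
import torchvision.transforms as transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

transformations = transforms.Compose([
    transforms.Resize(64),                        # shorter side -> 64 pixels
    transforms.CenterCrop(64),                    # make every image exactly 64x64
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),   # map pixel values to [-1, 1]
])

dataset = ImageFolder('data/ImageNet/train', transform=transformations)
loader = DataLoader(dataset, batch_size=256, shuffle=True, num_workers=8)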
@VictorZuanazzi and should I rename the images the same as ImageNet's?
That was not necessary for me, but I don't know how your dataset class is set up.
@VictorZuanazzi my data are PNG files, so does that mean I cannot put them with the ImageNet files? I am using the code below to fine-tune the model on my own dataset, is that correct?
!python train.py \
  --experiment_name fineTune --resume \
  --dataset I128_hdf5 --parallel --shuffle --num_workers 8 --batch_size 256 --load_in_mem \
  --num_G_accumulations 8 --num_D_accumulations 8 \
  --num_D_steps 1 --G_lr 1e-4 --D_lr 4e-4 --D_B2 0.999 --G_B2 0.999 \
  --G_attn 64 --D_attn 64 \
  --G_nl inplace_relu --D_nl inplace_relu \
  --SN_eps 1e-6 --BN_eps 1e-5 --adam_eps 1e-6 \
  --G_ortho 0.0 \
  --G_shared \
  --G_init ortho --D_init ortho \
  --hier --dim_z 120 --shared_dim 128 \
  --G_eval_mode \
  --G_ch 96 --D_ch 96 \
  --ema --use_ema --ema_start 20000 \
  --test_every 2000 --save_every 1000 --num_best_copies 5 --num_save_copies 2 --seed 0
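On the PNG question: ImageNet ships as JPEGs, so if the preprocessing step you use turns out to be picky about file formats, one possible workaround is to convert the PNGs to RGB JPEGs first. This is only a sketch with hypothetical paths; whether the conversion is actually required depends on your data pipeline.

# A sketch with hypothetical paths: convert PNG images to RGB JPEGs so they
# match the format of the surrounding ImageNet files.
from pathlib import Path
from PIL import Image

src = Path('data/my_dataset')                 # hypothetical PNG folder
dst = Path('data/ImageNet/train/n01440764')   # hypothetical target class folder
dst.mkdir(parents=True, exist_ok=True)

for png_file in src.glob('*.png'):
    img = Image.open(png_file).convert('RGB')            # drop any alpha channel
    img.save(dst / (png_file.stem + '.JPEG'), 'JPEG')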