stylegan2-pytorch
The generated samples show very little variation
I'm trying to generate some avatars of NBA players.
After 5000 iterations I can barely make out the silhouette of a player, and the generated samples all look nearly identical. Is this normal?
The command I run is
stylegan2_pytorch --data statmuse_NBA --name statmuse --multi-gpus --transparent --batch-size 64 --aug-prob 0.25 --num-train-steps 100000 --no-pl-reg
How many images do you have in your dataset?
1200 images, is it too few?
Not necessarily too few. You could increase --aug-prob, include color in aug_types (alongside the default translation and cutout), and/or start from a pre-trained model. I hope this helps a little!
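For reference, applying those suggestions to the original command might look like the sketch below. The list syntax for --aug-types follows the pattern shown in the project README; double-check the flag names against your installed version.

```shell
# Raise the augmentation probability and add color augmentation
# (translation and cutout are the defaults; color is added on top)
stylegan2_pytorch --data statmuse_NBA --name statmuse \
  --multi-gpus --transparent --batch-size 64 \
  --aug-prob 0.7 --aug-types [translation,cutout,color] \
  --num-train-steps 100000 --no-pl-reg
```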
@Otje89 Thank you for your suggestion, I will try it.
Finally got some results that look OK.
@FrazierLei was --aug-prob your path to better results?
@chris-aeviator It does matter! I increased --aug-prob to 0.7 and collected more images for training.
I also tried transfer learning (from CelebA to my custom dataset).
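A rough sketch of how transfer learning could be done with this CLI: train (or obtain) a checkpoint on the source dataset, place it under the new project name, and resume from it. The checkpoint layout (models/&lt;name&gt;/model_&lt;num&gt;.pt) and the --load-from flag are assumptions about this implementation, so verify them locally before relying on this.

```shell
# 1. Train a model on the source dataset, e.g. CelebA
stylegan2_pytorch --data ./celeba --name celeba --num-train-steps 150000

# 2. Reuse a saved checkpoint under the new project name
#    (checkpoint path/numbering assumed; adjust to what your run produced)
mkdir -p models/statmuse
cp models/celeba/model_150.pt models/statmuse/model_150.pt

# 3. Continue training on the target dataset from that checkpoint
stylegan2_pytorch --data ./statmuse_NBA --name statmuse --load-from 150
```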
@FrazierLei cool! Could you share the number of images you have been using to generate these?
@chris-aeviator I removed some images that were actually of the same player. E.g.
Then I collected some avatars of players from other leagues, bringing the total to about 1.5k images.
@FrazierLei Nice work! I'm really curious how you went about doing transfer learning with this implementation. Would you mind explaining?
+1