stylegan2-pytorch
Models with network-capacity higher than 16 never converge
Hi, I'm trying to improve the quality of the results by increasing the network-capacity of the model to values like 32 and 24.
However, unlike with the default value of 16, models with higher values never seem to converge.
For instance, here's the result at 12K steps (network-capacity of 32):
At the same number of steps, with identical settings apart from network capacity, the capacity-16 model already shows rough but quite distinct shapes, while the capacity-32 model doesn't look anything like that.
Here are my settings:
--save-every 100 --image-size 256 --batch-size 12 --gradient-accumulate-every 4 --network-capacity 32 --evaluate-every 100 --aug-prob 0.27 --aug-types [translation] --calculate-fid-every 5000
The dataset consists of roughly 5,400 well-curated images of human bodies.
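For reference, my understanding is that network-capacity acts as a per-layer channel multiplier, so going from 16 to 32 roughly doubles the width of the higher-resolution layers (the deepest layers are already capped). Here's a small sketch of that assumption; the function name and the fmap_max cap are illustrative, not taken from the library source:

```python
# Sketch (my assumption, not the library's exact code) of how a capacity
# multiplier typically scales per-layer channel counts in a StyleGAN2-style
# generator, with a cap so the deepest layers don't grow without bound.
from functools import partial

def layer_filters(image_size, network_capacity, fmap_max=512):
    # number of resolution blocks from 4x4 up to image_size (256 -> 7 blocks)
    num_layers = int(image_size).bit_length() - 2
    # channel count doubles per block and is scaled by the capacity multiplier
    filters = [network_capacity * (2 ** (i + 1)) for i in range(num_layers)][::-1]
    # cap every layer at fmap_max
    return list(map(partial(min, fmap_max), filters))

print(layer_filters(256, 16))  # [512, 512, 512, 256, 128, 64, 32]
print(layer_filters(256, 32))  # [512, 512, 512, 512, 256, 128, 64]
```

If that's roughly how it works, the capacity-32 model is considerably wider at the higher resolutions, which I assume also changes how long it takes to converge.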