Kim Seonghyeon
Currently it does not have it.
It should be the same. I think it could be directly convertible if the keys match.
MappingNetwork corresponds to Generator.style, and SynthesisNetwork corresponds to the rest of the generator. You can match the keys in order, and you can refer to convert_weight.py, as the official PyTorch implementation is...
affine corresponds to modulation. Noise weight and noise will correspond to noise strength and noise const.
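To illustrate the matching-keys-in-order idea (convert_weight.py does the real work, with shape checks and per-layer handling), here is a minimal, hypothetical sketch. The dictionary keys below are illustrative placeholders, not the actual checkpoint keys.

```python
# Hypothetical sketch: map one state dict onto another whose keys differ
# in name but line up in order (e.g. official MappingNetwork weights onto
# Generator.style). Real conversion must also verify tensor shapes.
def convert_by_order(src_state, dst_state):
    """Return a copy of dst_state with values taken from src_state in key order."""
    assert len(src_state) == len(dst_state), "key counts must match"
    out = {}
    for (_, value), dst_key in zip(src_state.items(), dst_state.keys()):
        out[dst_key] = value
    return out

# Illustrative key names only:
official = {"MappingNetwork.fc0.weight": 1.0, "SynthesisNetwork.conv0.weight": 2.0}
ours = {"style.1.weight": 0.0, "conv1.conv.weight": 0.0}
print(convert_by_order(official, ours))
```

In a real conversion you would iterate over `model.state_dict()` on both sides and copy tensors, not floats, but the ordering logic is the same.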
@bayndrysf I think it is a constant that is not required in this implementation.
You can refer to projector.py.
I don't know much about Colab, but it seems that %run -m torch.distributed.launch does not correspond to python -m torch.distributed.launch. Maybe this is a related issue: https://github.com/ipython/ipython/issues/8437
Currently I'm training the model.
1. Actually I found that FID converges slower than with vanilla StyleGAN 2.
2. I got about 5.1 FID with SwAGAN, which is about 1 FID higher than vanilla StyleGAN 2.
3. ...
After experiments, using a kernel size 3 FromRGB and the Haar wavelet implementation from https://github.com/lpj-github-io/MWCNNv2 further reduced the FID to 4.5. What I found interesting is that this wavelet implementation seems like...
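For reference, a minimal sketch of a one-level 2-D Haar decomposition, the kind of transform SwAGAN's wavelet layers are built on. This is an illustrative pure-Python version on a list-of-lists image; the MWCNNv2 implementation operates on tensors and may differ in normalization and subband layout.

```python
def haar2d(img):
    """One level of the 2-D Haar wavelet transform on a list-of-lists image
    with even height and width. Returns (LL, LH, HL, HH) subbands, dividing
    by 2 so the transform is orthonormal (sketch convention; libraries vary)."""
    h, w = len(img), len(img[0])
    LL, LH, HL, HH = [], [], [], []
    for i in range(0, h, 2):
        ll_row, lh_row, hl_row, hh_row = [], [], [], []
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            ll_row.append((a + b + c + d) / 2)  # low-pass average
            lh_row.append((a + b - c - d) / 2)  # vertical detail
            hl_row.append((a - b + c - d) / 2)  # horizontal detail
            hh_row.append((a - b - c + d) / 2)  # diagonal detail
        LL.append(ll_row); LH.append(lh_row); HL.append(hl_row); HH.append(hh_row)
    return LL, LH, HL, HH

# A flat (constant) image has all its energy in LL; the detail bands are zero.
print(haar2d([[1, 1], [1, 1]]))  # ([[2.0]], [[0.0]], [[0.0]], [[0.0]])
```

Each output subband has half the spatial resolution of the input, which is why a wavelet-domain generator can trade spatial size for channel count at every scale.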