progressive-growing-torch
Torch implementation of "PROGRESSIVE GROWING OF GANS FOR IMPROVED QUALITY, STABILITY, AND VARIATION"
The results were not satisfying, so I made a new PyTorch version of this code. Currently, I am working on it in PyTorch; the PyTorch version seems much faster...
I found that the weight freezing function WAS NOT WORKING, and I managed to track down why. I fixed the bug and confirmed that the weights now freeze properly. This was actually...
(https://github.com/torch/nn/blob/master/WeightNorm.lua)
~~~
------------
alpha:0    1-alpha:1
[1] grad sum:0.66109210252762
[2] alpha + 1-alpha:0.66109210252762
------------
alpha:0    1-alpha:1
[1] grad sum:-0.11638873815536
[2] alpha + 1-alpha:-0.11638873815536
------------
alpha:0    1-alpha:1
[1] grad sum:0.82711493968964
[2] alpha +...
~~~
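For reference, below is a minimal sketch of one common way to freeze a module's weights in Torch. The helper name `freeze` and the layer sizes are illustrative only, not this repo's actual fix.

~~~
require 'nn'

local function freeze(module)
    -- Shadow the metatable methods on this instance so backward passes
    -- no longer accumulate weight gradients for the frozen module.
    module.accGradParameters = function() end
    module.updateParameters  = function() end
    -- Zero any gradients already accumulated for this module.
    local _, gradParams = module:parameters()
    if gradParams then
        for _, g in ipairs(gradParams) do g:zero() end
    end
end

-- Illustrative usage: freeze an already-grown convolution block.
local block = nn.SpatialConvolution(512, 512, 3, 3, 1, 1, 1, 1)
freeze(block)
~~~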
It's a great implementation. I can see that it only took 12 minutes to reach 91872/202599. I wonder what your hardware setup is, because it took about 1 hour...
I found that PGGAN is very sensitive to the network structure, and I think it would be very helpful if an already-tested network were shared.
For test:
~~~
Generator structure:
nn.Sequential {
  [input -> (1) -> (2) -> output]
  (1): nn.Sequential {
    [input -> (1) -> (2) -> (3) -> (4) -> (5) -> (6)...
~~~
Using ReLU for the discriminator seems to help stabilize the network. I don't know why, because the paper used LeakyReLU for both the generator and the discriminator; more experiments are probably needed.
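A hedged sketch of the activation swap described above; the channel counts and kernel sizes are illustrative, not this repo's exact configuration.

~~~
require 'nn'

-- Build one discriminator block, choosing between plain ReLU and the
-- paper's LeakyReLU(0.2).
local function dBlock(nin, nout, useReLU)
    local act = useReLU and nn.ReLU(true) or nn.LeakyReLU(0.2, true)
    return nn.Sequential()
        :add(nn.SpatialConvolution(nin, nout, 3, 3, 1, 1, 1, 1))
        :add(act)
end

local dis = nn.Sequential()
    :add(dBlock(16, 32, true))   -- plain ReLU (seemed more stable here)
    :add(dBlock(32, 64, true))
~~~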
Take a close look at the README. You might need to create a dummy folder and put all the images in it; otherwise, you will encounter:
The paper used a variant of local response normalization, but I just used nn.SpatialCrossMapLRN(1) for convenience of implementation.
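For reference, the pixelwise feature normalization the paper describes can be computed directly on a tensor as below. This is only a minimal, hedged illustration (not a trainable nn module, and not the code used in this repo), shown to contrast with the nn.SpatialCrossMapLRN(1) shortcut above.

~~~
require 'torch'

-- b = a / sqrt(mean_over_channels(a^2) + eps), computed per spatial location
local eps = 1e-8
local a   = torch.randn(1, 512, 4, 4)                   -- batch x channels x h x w
local rms = a:clone():pow(2):mean(2):add(eps):sqrt()    -- per-pixel RMS over channels
local b   = torch.cdiv(a, rms:expandAs(a))              -- normalized activations
~~~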