style-based-gan-pytorch
Bug in function accumulate
The accumulate function in train.py (https://github.com/rosinality/style-based-gan-pytorch/blob/master/train.py#L25) only accumulates a running average of the generator's optimizer-trainable parameters. If the generator has parameters that are not trained by the optimizer, such as the ones found in batch_norm layers, this function will not accumulate them. The StyleGAN model specifically does not have that kind of parameter, so for this particular project the function is valid, but it should not be copied over to another project as-is.
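For context, here is a minimal sketch of the pattern being discussed (not copied verbatim from train.py; the helper names and decay value are illustrative). The first function mirrors the parameter-only EMA, and the second variant additionally copies non-trainable buffers such as BatchNorm running statistics, which is what a project that does use such layers would need:

```python
import torch
from torch import nn

def accumulate(model_ema, model, decay=0.999):
    # EMA over trainable parameters only, as described above.
    ema_params = dict(model_ema.named_parameters())
    params = dict(model.named_parameters())
    for name in ema_params:
        ema_params[name].data.mul_(decay).add_(params[name].data, alpha=1 - decay)

def accumulate_with_buffers(model_ema, model, decay=0.999):
    # Variant that also copies non-trainable buffers (e.g. BatchNorm
    # running_mean / running_var), which named_parameters() does not include.
    accumulate(model_ema, model, decay)
    buffers = dict(model.named_buffers())
    for name, buf_ema in model_ema.named_buffers():
        buf_ema.copy_(buffers[name])

if __name__ == "__main__":
    net = nn.Sequential(nn.Linear(8, 8), nn.BatchNorm1d(8))
    # running_mean / running_var are buffers, not parameters, so the plain
    # accumulate() above would never update them in the EMA copy:
    print([n for n, _ in net.named_parameters()])  # weights and biases only
    print([n for n, _ in net.named_buffers()])     # includes '1.running_mean', ...
```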
Is there a reason you made the same issue for this repository and this one (https://github.com/rosinality/stylegan2-pytorch/issues/172)?
Just so people who use this repo get a warning. Also, these are two unrelated repos.
@rosinality Why do you use an EMA to update the model rather than directly optimizing the model?
@shoutOutYangJie The exponential moving average is used to increase stability and sample quality in GANs. See Section C.1 of Large Scale GAN Training for High Fidelity Natural Image Synthesis.
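A rough sketch of how such an EMA copy is typically used during training (the tiny generator, loss, and hyperparameters below are placeholders, not the repo's actual setup): the optimizer updates the live generator, the EMA copy is smoothed after every step, and sampling is done from the smoothed weights.

```python
import copy
import torch
from torch import nn

generator = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 3 * 32 * 32))
g_ema = copy.deepcopy(generator).eval()        # EMA copy, used only for sampling
for p in g_ema.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(generator.parameters(), lr=1e-3)

for step in range(100):
    z = torch.randn(8, 64)
    loss = generator(z).pow(2).mean()          # stand-in for the real generator loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # EMA update after each optimizer step (equivalent to the accumulate sketch above).
    with torch.no_grad():
        for p_ema, p in zip(g_ema.parameters(), generator.parameters()):
            p_ema.mul_(0.999).add_(p, alpha=1 - 0.999)

# Draw evaluation samples from the smoothed weights, not the raw generator.
with torch.no_grad():
    samples = g_ema(torch.randn(16, 64))
```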
@ParthaEth The original implementation uses the function setup_as_moving_average_of defined here. They explicitly set the beta for non-trainable variables to 0, thus the implementation provided by @rosinality matches the official implementation exactly in this regard.
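In other words, with beta set to 0 the moving-average update degenerates into a direct copy, so the non-trainable variables always track the current model exactly; a toy check of that identity:

```python
import torch

ema = torch.tensor([1.0, 2.0])
current = torch.tensor([5.0, 7.0])
beta = 0.0
ema = beta * ema + (1 - beta) * current  # with beta == 0 this is just `current`
assert torch.equal(ema, current)
```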