Parameter tuning and re-implementation with PyTorch
First, thank you for the impressive work! I am currently re-implementing a PyTorch version of co-mod-gan, and I have several questions about the model:
- Have you tried different R1 regularization weights? Empirically, I found that with an R1 weight smaller than 10 the L1 loss converges faster (see the first sketch after this post). I wonder if you tried other R1 weights?
- Would applying dropout to the global code improve performance?
- Have you tried adding a skip connection to the encoder?
- Also, why is the style mixing probability set to 0.5? (See the second sketch below for what I mean.)
Thanks
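
For context on the first question, here is a minimal sketch of the R1 penalty as it appears in StyleGAN2-style training loops. `discriminator` and `real_images` are placeholders, and `gamma` is the weight under discussion (10 being the default mentioned above):

```python
import torch

def r1_penalty(discriminator, real_images, gamma=10.0):
    # Enable gradients w.r.t. the real batch.
    real_images = real_images.detach().requires_grad_(True)
    logits = discriminator(real_images)
    # d(sum of logits)/d(real_images), kept in the graph so the
    # penalty itself can be backpropagated through.
    (grads,) = torch.autograd.grad(
        outputs=logits.sum(), inputs=real_images, create_graph=True
    )
    # gamma/2 * E[ ||grad||^2 ] over the batch.
    return (gamma / 2) * grads.pow(2).flatten(1).sum(1).mean()
```

And for the style mixing question, this is roughly what the 0.5 controls. The sketch below is modeled on the `mixing_noise` helper in rosinality/stylegan2-pytorch; the function name and defaults here are illustrative, not from the official co-mod-gan code:

```python
import random
import torch

def mixing_latents(batch, latent_dim, prob=0.5, device="cpu"):
    # With probability `prob`, return two latent codes; the generator
    # then switches from one to the other at a random layer.
    if random.random() < prob:
        return [torch.randn(batch, latent_dim, device=device) for _ in range(2)]
    # Otherwise a single code drives every layer.
    return [torch.randn(batch, latent_dim, device=device)]
```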
Unfortunately, I may not have useful information regarding your questions. Most of the hyperparameters were chosen by intuition, as we didn't have many resources to run experiments.
This sounds amazing, @htzheng. I am looking forward to the code. Good luck. I would love to try a PyTorch version, since TensorFlow 1 is painful to work with. I didn't manage to get it running on TensorFlow 2 or to convert the model to ONNX, which makes co-mod-gan impossible to use on newer GPUs. With PyTorch, usage should be easy, and I could add it to my own code.
@styler00dollar It is still hard for me to release the code while I am doing my summer internship, but I will try to release it after September. In the meantime, you could try modifying the training code and model from https://github.com/rosinality/stylegan2-pytorch
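
For anyone attempting that route, here is a very rough sketch of the co-modulation step itself, under the assumption that an encoder produces a flattened global code that is fused with the mapping network's output by a learned affine. The class and dimension names are hypothetical, not taken from the official code:

```python
import torch
import torch.nn as nn

class CoModulation(nn.Module):
    """Fuses the mapping network's style with an encoder's global code."""

    def __init__(self, style_dim=512, global_dim=512):
        super().__init__()
        # Learned affine over the concatenated vectors, producing the
        # style that the modulated convolutions actually consume.
        self.affine = nn.Linear(style_dim + global_dim, style_dim)

    def forward(self, w, global_code):
        # w: [batch, style_dim] from the mapping network
        # global_code: [batch, global_dim] from the encoder
        return self.affine(torch.cat([w, global_code], dim=1))
```

In rosinality's generator, the fused vector would then stand in for `w` wherever the modulated convolutions take a style input.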