
Parameter tuning and re-implementation with PyTorch

Open htzheng opened this issue 3 years ago • 3 comments

First, thank you for the impressive work! I am currently re-implementing a PyTorch version of co-mod-gan, and I have several questions regarding the model:

  1. Have you tried different R1 regularization weights? Empirically, I found that with an R1 weight smaller than 10, the L1 loss converges faster. I wonder if you tried other R1 weights?
  2. Does dropout on the global code improve performance?
  3. Have you tried adding a skip connection to the encoder?
  4. Also, why is the style mixing weight set to 0.5?
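
For context on question 1, this is roughly what I mean by the R1 weight. A minimal sketch of R1 regularization (the gradient penalty on real images from the StyleGAN2 line of work); `gamma` is the weight in question, and the `discriminator` interface here is assumed, not the authors' actual code:

```python
import torch

def r1_penalty(discriminator, real_images, gamma=10.0):
    """R1 regularization: penalize the discriminator's gradient on real images.

    gamma is the R1 weight discussed above; StyleGAN2 typically uses 10,
    but smaller values may speed up L1-loss convergence in inpainting.
    """
    real_images = real_images.detach().requires_grad_(True)
    scores = discriminator(real_images)
    # Gradient of the discriminator output w.r.t. the real images.
    (grads,) = torch.autograd.grad(
        outputs=scores.sum(), inputs=real_images, create_graph=True
    )
    # Squared gradient norm, averaged over the batch.
    penalty = grads.reshape(grads.size(0), -1).pow(2).sum(dim=1).mean()
    return (gamma / 2.0) * penalty
```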

Thanks

htzheng avatar May 22 '21 03:05 htzheng

Unfortunately, I may not have useful information regarding your questions. Most of the hyperparameters were chosen by intuition, as we didn't have many resources to run experiments.

zsyzzsoft avatar May 22 '21 17:05 zsyzzsoft

This sounds amazing @htzheng. I am looking forward to the code. Good luck! I would love to try a PyTorch version, since TensorFlow 1 is painful to work with. I didn't manage to use TensorFlow 2 or convert the model to ONNX, which makes co-mod-gan impossible to use on new GPUs. With PyTorch, usage should be easy, and I could add it to my own code.

styler00dollar avatar Jun 02 '21 20:06 styler00dollar

@styler00dollar It is still hard for me to release the code while I am doing my summer internship, but I will try to release it after September. You could try modifying the training code and model from https://github.com/rosinality/stylegan2-pytorch
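
If it helps anyone starting from that repo, here is a rough sketch of the style-mixing regularization asked about in question 4, assuming a generic `mapping` network interface (the 0.5 is the probability of mixing two latent codes per batch, not a loss weight; the function and argument names here are hypothetical):

```python
import random
import torch

def mixed_styles(mapping, num_layers, batch_size, z_dim, mixing_prob=0.5):
    """Style mixing regularization, StyleGAN2-style.

    With probability mixing_prob (the 0.5 in question), two latent codes
    are sampled and their W-space styles are spliced at a random layer;
    otherwise a single code's style is broadcast to all layers.
    """
    z1 = torch.randn(batch_size, z_dim)
    w1 = mapping(z1)                                  # (batch, w_dim)
    ws = w1.unsqueeze(1).repeat(1, num_layers, 1)     # one style per layer
    if random.random() < mixing_prob:
        # Sample a second latent and use its styles from a
        # randomly chosen crossover layer onward.
        z2 = torch.randn(batch_size, z_dim)
        w2 = mapping(z2)
        crossover = random.randint(1, num_layers - 1)
        ws[:, crossover:] = w2.unsqueeze(1)
    return ws
```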

htzheng avatar Jun 05 '21 12:06 htzheng