MUNIT
Worse translation results after updating to PyTorch 0.4.0
Hi all, after updating to PyTorch 0.4.0 I tried to train MUNIT on the summer2winter_yosemite dataset from CycleGAN (https://github.com/junyanz/CycleGAN), yet the results are of poor quality. I had tried the initially released version of MUNIT before and it worked perfectly well. Is anyone facing the same issue after switching to PyTorch 0.4.0?
Here is a snapshot after 150,000 iterations:

I use the default configuration.
Thank you.
@HsinYingLee I also have a similar problem. How many images do you have in your training set (trainA and trainB)?
@MilanKolkata According to the recent commit, the degradation is due to setting `track_running_stats=True` for instance normalization. I haven't tried the updated code yet, but I believe the problem should be fixed.
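To make the difference concrete, here is a minimal, self-contained sketch (the channel count and input shape are arbitrary, not taken from MUNIT's networks) showing how `track_running_stats` changes the eval-time behaviour of `nn.InstanceNorm2d`:

```python
import torch
import torch.nn as nn

# Two instance-norm layers that differ only in track_running_stats.
# With track_running_stats=True the layer accumulates running mean/var during
# training and uses those accumulated statistics at eval time, instead of the
# per-image statistics that instance normalization normally relies on.
in_tracking = nn.InstanceNorm2d(64, track_running_stats=True)
in_no_tracking = nn.InstanceNorm2d(64, track_running_stats=False)

x = torch.randn(1, 64, 128, 128)

# One training-mode forward pass updates the running statistics of in_tracking.
in_tracking.train()
in_no_tracking.train()
_ = in_tracking(x)
_ = in_no_tracking(x)

# At eval time the two layers now normalize differently.
in_tracking.eval()
in_no_tracking.eval()
y_tracked = in_tracking(x)      # normalized with the accumulated running stats
y_instance = in_no_tracking(x)  # normalized with this image's own stats
print(torch.allclose(y_tracked, y_instance))  # typically False
```

With tracking disabled, the layer keeps normalizing each image with its own statistics at test time, which matches the per-image behaviour the fix mentioned above restores.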
@HsinYingLee Thanks for mentioning it. I am trying the new code. How many images did you use when training the model?
@MilanKolkata In commit 972e42, the custom LayerNorm only supports one image per batch. With the new commit 4c21350, it supports multiple images per batch. However, the time required for each iteration increases by roughly 4x when you use a batch size greater than 1 (this is due to the change in how PyTorch implements the view function in 0.4). For training with a batch size greater than 1, please roll back to PyTorch 0.3 and use munit_pytorch0.3.
BTW, I am still confirming whether the performance is the same for PyTorch 0.3 and 0.4.
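For readers following along, here is a minimal sketch (not the code from commit 972e42 or 4c21350) of a per-sample layer normalization that supports a batch dimension; the `.contiguous()` call before `view` is one way to cope with PyTorch 0.4's stricter `view` semantics, at the cost of an extra copy:

```python
import torch
import torch.nn as nn

class SimpleLayerNorm(nn.Module):
    """Normalize each sample over all of its features (C*H*W).

    Statistics are computed per sample, so any batch size works.
    """
    def __init__(self, num_features, eps=1e-5, affine=True):
        super(SimpleLayerNorm, self).__init__()
        self.eps = eps
        self.affine = affine
        if affine:
            self.gamma = nn.Parameter(torch.ones(num_features))
            self.beta = nn.Parameter(torch.zeros(num_features))

    def forward(self, x):
        b = x.size(0)
        # contiguous() guards the view() call on non-contiguous inputs,
        # which copies the data and is one source of extra per-iteration cost.
        flat = x.contiguous().view(b, -1)
        mean = flat.mean(1).view(b, 1, 1, 1)
        std = flat.std(1).view(b, 1, 1, 1)
        x = (x - mean) / (std + self.eps)
        if self.affine:
            x = x * self.gamma.view(1, -1, 1, 1) + self.beta.view(1, -1, 1, 1)
        return x

# Example: a batch of 4 feature maps with 64 channels.
x = torch.randn(4, 64, 32, 32)
ln = SimpleLayerNorm(64)
print(ln(x).shape)  # torch.Size([4, 64, 32, 32])
```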
@mingyuliutw Thanks! I will try munit_pytorch0.3 and hope it speeds up the training. I also hope the code will support multi-GPU training in the future.
@HsinYingLee and @MilanKolkata I think the code is now working properly in PyTorch 0.4.
We spent some time in the past few days playing with summer2winter_yosemite256. We found that enabling the explicit cycle-consistency loss makes the model converge faster and to a better result. The new config file can be found in configs/summer2winter_yosemite256_folder.yaml.
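For anyone who wants to try the same setting, here is an illustrative sketch of the relevant loss weights; the key names (e.g. `recon_x_cyc_w` for the explicit cycle-consistency term) and values are assumptions, so check configs/summer2winter_yosemite256_folder.yaml for the actual configuration:

```yaml
# Illustrative loss weights only; see configs/summer2winter_yosemite256_folder.yaml
# in the repository for the real values and key names.
gan_w: 1              # weight of the adversarial loss
recon_x_w: 10         # weight of the image reconstruction loss
recon_s_w: 1          # weight of the style reconstruction loss
recon_c_w: 1          # weight of the content reconstruction loss
recon_x_cyc_w: 10     # weight of the explicit cycle-consistency loss (0 disables it)
```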