Jun-Yan Zhu
I hypothesize that the ResNet has fewer parameters and less downsampling, both of which help color and style transfer, while U-Net has many more parameters (hard to learn these...
Great. Does it help in your experiments? Maybe @zsyzzsoft can help create a PR or a branch for all the models. We are doing some experiments with conditional GANs and...
For crop_size=512, batch_size=2 will require a significant amount of GPU memory. I am not sure how much memory you have per GPU. I am not sure if you need to...
Maybe you could use some data augmentation. Currently, both your load_size and crop_size are 512. Maybe use load_size 580 and crop_size 512. Certain artifacts will go away if the model...
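The load_size/crop_size trick above is resize-then-random-crop augmentation: resize the image to a size slightly larger than the training crop, then take a random crop window, so the model sees shifted views of each image. A minimal dependency-free sketch (the helper name is hypothetical, not from the repo):

```python
import random

def random_crop_box(load_size, crop_size):
    """Pick a random crop window inside an image resized to load_size.

    Mirrors the --load_size/--crop_size options conceptually:
    load_size > crop_size leaves room for a random offset.
    """
    assert load_size >= crop_size
    x = random.randint(0, load_size - crop_size)
    y = random.randint(0, load_size - crop_size)
    return x, y, x + crop_size, y + crop_size  # left, top, right, bottom

box = random_crop_box(580, 512)
```

In the actual repo this corresponds to passing `--load_size 580 --crop_size 512` on the command line; the cropping itself is done by the dataset's preprocessing transforms.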
Maybe your newly added loss is too strong compared to the existing CycleGAN losses.
Maybe reduce the weight of your new loss,
or maybe make the discriminator weaker or the generator stronger.
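The weighting suggestion can be sketched in one line: scale the new term by a small coefficient so it doesn't dominate the total generator objective. The loss values and the weight below are illustrative placeholders, not numbers from the repo:

```python
# Hypothetical loss values for one training step (placeholders only).
loss_gan = 1.0     # adversarial loss
loss_cycle = 5.0   # cycle-consistency loss
loss_new = 20.0    # the newly added loss, much larger in raw magnitude

# Down-weight the new term so it is comparable to the existing losses.
lambda_new = 0.1   # assumed value; tune downward if training destabilizes
loss_G = loss_gan + loss_cycle + lambda_new * loss_new  # 8.0
```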
You can set different learning rates for G and D. See this [paper](https://arxiv.org/abs/1706.08500) for more details.
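The linked paper's two time-scale update rule (TTUR) boils down to giving D a larger step size than G. A minimal gradient-descent sketch, with illustrative rates and parameter values (not from the repo):

```python
# TTUR-style rates: the discriminator learns faster than the generator.
lr_G, lr_D = 1e-4, 4e-4  # assumed values, following the TTUR paper's spirit

def sgd_step(param, grad, lr):
    """One plain gradient-descent update on a scalar parameter."""
    return param - lr * grad

theta_G = sgd_step(0.5, 2.0, lr_G)  # generator takes a small step
theta_D = sgd_step(0.5, 2.0, lr_D)  # discriminator takes a larger step
```

In PyTorch you would realize this by constructing two optimizers, one for G's parameters and one for D's, each with its own `lr`.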
Please see these two discussions [1](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/150) and [2](https://github.com/phillipi/pix2pix/issues/116) @tinghuiz
I think we just assigned it to its nearest neighbor in color space. @tinghuiz @phillipi
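The nearest-neighbor assignment can be sketched as follows: each pixel is mapped to the reference color with the smallest squared RGB distance. The palette below is illustrative, not the one actually used:

```python
# Hypothetical reference palette (black, red, green, blue).
palette = [(0, 0, 0), (255, 0, 0), (0, 255, 0), (0, 0, 255)]

def nearest_color(pixel, palette):
    """Return the palette color closest to pixel in squared RGB distance."""
    return min(palette, key=lambda c: sum((p - q) ** 2 for p, q in zip(pixel, c)))

nearest_color((10, 240, 5), palette)  # snaps to pure green
```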