pytorch-CycleGAN-and-pix2pix
Transformations are blurry and reconstructions good
Hi, I am currently using your CycleGAN code. However, I am finding that the transformations are blurry and completely gray, while the reconstructions are really nice. I tried reducing the learning rate from 0.0002 to 0.00002, but that didn't really help. Do you know of anything else that might work? I find it strange that the reconstructions are great while the transformations are really bad.
Do you have any suggestions for this matter?
It happens when the transformation is difficult to learn or the two domains look drastically different. I am not sure whether reducing the learning rate will help in your case. I would try (1) adding some paired data or (2) using small cropped patches rather than the entire image.
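For (2), on-the-fly random crops should already be possible through the training options (something like `--preprocess crop --crop_size 128`), or you can pre-extract patches offline. Below is a minimal sketch of the offline approach; the paths, patch size, and stride are just placeholders, not part of the repo:

```python
import os
from PIL import Image

def extract_patches(src_dir, dst_dir, patch_size=128, stride=128):
    """Cut every image in src_dir into patch_size x patch_size tiles
    and save them to dst_dir for patch-based CycleGAN training."""
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        img = Image.open(os.path.join(src_dir, name)).convert('RGB')
        w, h = img.size
        base, _ = os.path.splitext(name)
        for top in range(0, h - patch_size + 1, stride):
            for left in range(0, w - patch_size + 1, stride):
                patch = img.crop((left, top, left + patch_size, top + patch_size))
                patch.save(os.path.join(dst_dir, f'{base}_{top}_{left}.png'))

# Example (hypothetical paths):
# extract_patches('./datasets/mydata/trainA', './datasets/mydata_patches/trainA')
# extract_patches('./datasets/mydata/trainB', './datasets/mydata_patches/trainB')
```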
I opened a new issue here #1286, since I noticed somewhat similar behavior of CycleGAN in the case of RGB to IR/thermal translation. @kpagels if you have any thoughts, please feel free to share.
In my case, the translation itself works reasonably well, but the text in the image gets distorted during reconstruction. Is there a good way to adjust for this?
@duke023456 It is hard for CycleGAN to preserve the text in an image. If you can detect the text using an OCR method, you can try to preserve them using an additional loss.
What I am trying to do is remove stamps from images while keeping the text. Right now the text is not preserved well enough. Is there any way to solve this?
You can detect the region that contains the text using an OCR detection method, and add a loss function to preserve the pixels within the region.
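To make that concrete, here is a rough sketch of such a masked loss; the function name, the mask source, and the weight are placeholders, not something in the released code. A binary mask from an OCR/text detector marks the pixels that should stay unchanged, and an L1 penalty inside that mask is added to the generator objective:

```python
import torch

def text_preserving_loss(real, fake, text_mask, weight=10.0):
    """L1 penalty restricted to the text region.

    real, fake : (N, C, H, W) tensors in the same value range
    text_mask  : (N, 1, H, W) binary mask from an OCR/text detector,
                 1 inside text regions, 0 elsewhere
    """
    diff = torch.abs(fake - real) * text_mask
    # Normalise by the number of masked pixels to keep the loss scale stable.
    return weight * diff.sum() / text_mask.sum().clamp(min=1.0)

# Added to the generator objective, e.g. inside backward_G (names are illustrative):
# loss_G = loss_G_gan + lambda_cyc * loss_cycle + text_preserving_loss(real_A, fake_B, mask_A)
```

Normalising by the number of masked pixels keeps the penalty comparable across images that contain different amounts of text.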