BDL
First Image Translation
Hi, I have read the paper and still have some questions.
-
CycleGAN is trained with a perceptual loss. Does the first image translation use the perceptual loss? If so, which segmentation model's parameters are used? Is it the source-only model with 33.6 mIoU from the paper?
-
With the first translated images in hand, when starting the adversarial training of the segmentation model, are the initial model parameters the ImageNet-pretrained parameters, or the source-trained parameters with 33.6 mIoU?
For your first question, I tried both scenarios and got similar results, so you can train CycleGAN without the perceptual loss. For the second one, the model is pretrained on ImageNet.
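For readers landing here, a minimal PyTorch sketch of the two points in this answer. The names `seg_model` and `generator` are placeholders, and the KL form of the perceptual (semantic-consistency) term is an assumption for illustration, not necessarily the paper's exact loss:

```python
import torch
import torch.nn.functional as F
from torchvision import models

# ImageNet initialization: a DeepLab-style segmentation model is typically
# built on an ImageNet-pretrained backbone such as ResNet-101
# (illustrative; BDL's own model code may differ).
backbone = models.resnet101(pretrained=True)

def perceptual_loss(seg_model, generator, x_source):
    """Semantic-consistency term for the CycleGAN generator: a frozen
    segmentation model should predict roughly the same labels before and
    after translation. KL divergence is one plausible choice."""
    with torch.no_grad():
        ref = F.softmax(seg_model(x_source), dim=1)             # labels on the original image
    out = F.log_softmax(seg_model(generator(x_source)), dim=1)  # labels after translation
    return F.kl_div(out, ref, reduction='batchmean')            # penalize semantic drift
```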
Thanks for the quick reply! I found it very slow to train CycleGAN with the perceptual loss (it may take around a month in my situation; I mentioned this under another question). So I'm surprised that you only spent 4 days. Did you use a single GPU or multiple GPUs?
I use 4 GPUs.
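For context, the simplest way to spread training over 4 GPUs in PyTorch is `nn.DataParallel`. A minimal sketch with a stand-in generator, since the thread doesn't show BDL's actual launch setup:

```python
import torch
import torch.nn as nn

# Stand-in generator; BDL's real CycleGAN generator is a ResNet-based network.
generator = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, padding=3), nn.ReLU(inplace=True),
    nn.Conv2d(64, 3, kernel_size=7, padding=3), nn.Tanh(),
)
# Replicate the module on GPUs 0-3; each forward pass splits the batch
# across them, so a batch of 16 runs as 4 images per GPU.
generator = nn.DataParallel(generator, device_ids=[0, 1, 2, 3]).cuda()

x = torch.randn(16, 3, 256, 256).cuda()
fake = generator(x)   # scattered to 4 GPUs, gathered back on GPU 0
```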
Thanks, that might be normal. The GPU I used is not comparable to a Tesla V100. Would it be convenient for you to upload the first-translation images? It's also OK if not. When you train CycleGAN with a larger batch size, is the initial learning rate you use the same as standard CycleGAN's?
You can train with fewer epochs. I uploaded the parameters I use; you can refer to them.
Thanks very much! I've seen it.
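On the learning-rate question above: standard CycleGAN uses Adam with lr = 2e-4 at batch size 1, and a common heuristic when raising the batch size for multi-GPU training is to scale the learning rate linearly. This is a general rule of thumb, not the author's confirmed setting; the uploaded parameters have the actual values:

```python
import torch

base_lr, base_batch = 2e-4, 1           # standard CycleGAN: Adam, lr 2e-4, batch size 1
batch_size = 4                          # assumed: one image per GPU on 4 GPUs
lr = base_lr * batch_size / base_batch  # linear scaling heuristic

params = [torch.nn.Parameter(torch.zeros(1))]  # stand-in for generator params
optimizer = torch.optim.Adam(params, lr=lr, betas=(0.5, 0.999))
```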