pytorch-CycleGAN-and-pix2pix
The test results of pix2pix are much worse than the training results.
I am trying to convert RGB images into an infrared image style by training pix2pix.
The results look very good during training.
However, when I run the test with the same model and the same parameters, the results are very poor. What could be the reason?
Please help me.
(The first picture is the original, the second is the image saved during training, and the third is the actual test result.)
Could you share with us the training and test command lines?
Did you use the same flags (e.g., --preprocess)?
Another possible cause is whether the model is evaluated with eval() mode turned on or off (link). Could you run the test with and without the --eval option and see if that makes a difference?
I am also working on the same task, converting RGB to IR, and I see the same behavior as described above. In my case, however, running the test without --eval does not help much. My results are not as bad as those above, but there is certainly a significant loss of detail in the fake images at test time. I have attached some example images below.
While training, the fake image (left) and the real image (right) look something like this.
During testing, the fake image (left) and the real image (right) look like this.
Is there any other way to improve these results? I am currently using part of the KAIST dataset.
The model might be overfitting the training set. To prevent overfitting, you can either use a larger dataset or apply more aggressive augmentation (see the --preprocess option for more details).
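As a sketch of what that augmentation does, here is an assumed re-implementation of the resize-and-crop jitter that --preprocess resize_and_crop applies (upscale to a load size, take a random crop, randomly flip horizontally); the function name and the 286/256 defaults are illustrative.

```python
import torch
import torch.nn.functional as F

def random_jitter(img, load_size=286, crop_size=256):
    """Sketch of pix2pix-style resize-and-crop augmentation.
    `img` is a CHW float tensor; sizes are assumed defaults."""
    # Upscale so the subsequent crop sees a slightly different region each time.
    img = F.interpolate(img.unsqueeze(0), size=(load_size, load_size),
                        mode="bicubic", align_corners=False).squeeze(0)
    # Random crop back down to the training resolution.
    top = torch.randint(0, load_size - crop_size + 1, (1,)).item()
    left = torch.randint(0, load_size - crop_size + 1, (1,)).item()
    img = img[:, top:top + crop_size, left:left + crop_size]
    # Random horizontal flip.
    if torch.rand(1).item() < 0.5:
        img = torch.flip(img, dims=[2])
    return img

out = random_jitter(torch.randn(3, 256, 256))
print(out.shape)  # torch.Size([3, 256, 256])
```

For a paired dataset like KAIST, remember to apply the identical crop and flip to the RGB and IR images of each pair, otherwise the alignment the L1 loss relies on is destroyed.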