pytorch-CartoonGAN
Can't reproduce the results, maybe sensitive to data?
I ran the training scripts directly, but I can't reproduce the results: there is no significant difference between the original and the generated images.
Is it sensitive to the training data? I used face-cropped CelebA as src_data and face-cropped Danbooru2018 as tgt_data.
Each dataset contains about 1,600 images (for fast training). So, where is the problem? Thanks!
Also, it seems that the initialization phase suffers from the checkerboard effect, as illustrated in https://distill.pub/2016/deconv-checkerboard/.
I changed ConvTranspose2d to Upsample + Conv2d (as suggested in the above post), but the quality of the generated images drops a lot.
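For reference, the swap described above can be sketched like this. The channel counts and norm/activation layers here are illustrative, not the repo's exact generator config; the point is that nearest-neighbor Upsample followed by a stride-1 Conv2d is a shape-compatible drop-in for a stride-2 ConvTranspose2d:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: replacing a stride-2 transposed convolution with
# resize-then-convolve, per the Distill checkerboard-artifacts post.
deconv_block = nn.Sequential(
    nn.ConvTranspose2d(64, 32, kernel_size=3, stride=2,
                       padding=1, output_padding=1),
    nn.InstanceNorm2d(32),
    nn.ReLU(inplace=True),
)

resize_conv_block = nn.Sequential(
    nn.Upsample(scale_factor=2, mode='nearest'),
    nn.Conv2d(64, 32, kernel_size=3, stride=1, padding=1),
    nn.InstanceNorm2d(32),
    nn.ReLU(inplace=True),
)

x = torch.randn(1, 64, 32, 32)
# Both blocks double the spatial resolution, so one can replace the other.
print(deconv_block(x).shape)       # (1, 32, 64, 64)
print(resize_conv_block(x).shape)  # (1, 32, 64, 64)
```

Note the quality drop after this swap is a known trade-off; some people compensate by using 'bilinear' interpolation or a larger kernel in the following conv.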
@dejianchen1989 I think the checkerboard effect appears in the original paper too. Besides, there is an error while computing D_fake_loss = BCE_loss(D_fake, fake):
ValueError: Target and input must have the same number of elements. target nelement (8192) != input nelement (38400)
I found this is because D_fake and fake have different shapes. Should I resize all the natural images to 256x256 before training?
@dejianchen1989 Hi, I can't reproduce the results with CelebA and cartoon images either. Gen loss and Con loss don't decrease.
e = y[:, :, :, args.input_size:]
y = y[:, :, :, :args.input_size]
Here I'm getting an error.
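That slicing assumes each cartoon image in tgt_data is a side-by-side pair of width 2 * input_size (cartoon on the left, edge-smoothed version on the right), which is then split along the width axis. If your images are single cartoons of width input_size, the "edge" half comes out empty and downstream code fails. A sketch of the expected layout (tensor sizes are illustrative):

```python
import torch

input_size = 256
# Hypothetical batch of paired images: width is 2 * input_size because the
# cartoon and its edge-smoothed counterpart are concatenated horizontally.
y = torch.randn(8, 3, input_size, 2 * input_size)

e = y[:, :, :, input_size:]    # right half: edge-smoothed images
y = y[:, :, :, :input_size]    # left half: plain cartoon images
print(e.shape, y.shape)  # both torch.Size([8, 3, 256, 256])
```

If the split fails for you, check that the tgt_data images were actually built as these horizontal pairs before training.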
May I ask if your problem has been solved? My generated results are also not very different from the original data.