DiscoGAN-pytorch

It seems that the discriminator cannot adapt to the size of the input image.

Open · SwordHolderSH opened this issue 5 years ago • 1 comment

Whenever I set '--input_scale_size' to anything other than 64, an error is reported. It seems that the discriminator cannot adapt to the size of the input image. How did you solve this problem?

"ValueError: Target and input must have the same number of elements. target nelement (2) != input nelement (338)"

SwordHolderSH · Nov 09 '19 20:11

I checked the shapes of the output tensors. When the input size is 64×64:
torch.Size([2, 64, 32, 32]) torch.Size([2, 64, 32, 32])
torch.Size([2, 128, 16, 16]) torch.Size([2, 128, 16, 16]) torch.Size([2, 128, 16, 16])
torch.Size([2, 256, 8, 8]) torch.Size([2, 256, 8, 8]) torch.Size([2, 256, 8, 8])
torch.Size([2, 512, 4, 4]) torch.Size([2, 512, 4, 4]) torch.Size([2, 512, 4, 4])
torch.Size([2, 1, 1, 1]) torch.Size([2, 1, 1, 1])

When it is 256×256:
torch.Size([2, 64, 128, 128]) torch.Size([2, 64, 128, 128])
torch.Size([2, 128, 64, 64]) torch.Size([2, 128, 64, 64]) torch.Size([2, 128, 64, 64])
torch.Size([2, 256, 32, 32]) torch.Size([2, 256, 32, 32]) torch.Size([2, 256, 32, 32])
torch.Size([2, 512, 16, 16]) torch.Size([2, 512, 16, 16]) torch.Size([2, 512, 16, 16])
torch.Size([2, 1, 13, 13]) torch.Size([2, 1, 13, 13])
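For context, those numbers are consistent with a DiscoGAN-style discriminator made of four 4×4 convolutions with stride 2 and padding 1, followed by a final 4×4 convolution with stride 1 and no padding: only a 64×64 input leaves a 4×4 feature map that the last layer collapses to 1×1, while a 256×256 input leaves 16×16 and the last layer produces 13×13 logits (2 × 1 × 13 × 13 = 338 elements, matching the error message). Here is a minimal sketch, assuming those layer hyperparameters and 3 input channels (inferred from the shapes above, not taken from the repository):

```python
import torch
import torch.nn as nn

# Hypothetical discriminator mirroring the printed shapes:
# four stride-2 4x4 convs (padding 1) halve the spatial size each time,
# then a final 4x4 conv with stride 1 and no padding emits the logits.
def make_discriminator():
    return nn.Sequential(
        nn.Conv2d(3,    64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64,  128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(256, 512, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(512, 1, 4, stride=1, padding=0),  # 4x4 -> 1x1 only for 64x64 inputs
    )

D = make_discriminator()
print(D(torch.randn(2, 3, 64, 64)).shape)    # torch.Size([2, 1, 1, 1])
print(D(torch.randn(2, 3, 256, 256)).shape)  # torch.Size([2, 1, 13, 13]) -> 2*13*13 = 338 elements
```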

So I added two max-pooling layers to make it roughly work...
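A hedged sketch of that kind of workaround, reusing the hypothetical layer stack from the previous snippet: two 2×2 max-pooling layers inserted before the final convolution bring the 16×16 feature map from a 256×256 input back down to 4×4, so the last layer again emits a single logit per image. nn.AdaptiveAvgPool2d((4, 4)) would be a size-agnostic alternative; neither snippet is the repository's actual code.

```python
import torch
import torch.nn as nn

# Workaround sketch: pool the 16x16 feature map (from 256x256 inputs)
# back to 4x4 before the final 4x4 convolution.
D_fixed = nn.Sequential(
    nn.Conv2d(3,    64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64,  128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(256, 512, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.MaxPool2d(2),  # 16x16 -> 8x8
    nn.MaxPool2d(2),  # 8x8  -> 4x4
    nn.Conv2d(512, 1, 4, stride=1, padding=0),  # 4x4 -> 1x1
)

# Only suitable for 256x256 inputs; a 64x64 input would shrink below the
# final 4x4 kernel and fail, so the pooling has to match the chosen size.
print(D_fixed(torch.randn(2, 3, 256, 256)).shape)  # torch.Size([2, 1, 1, 1])
```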

SwordHolderSH · Nov 09 '19 20:11