generative-inpainting-pytorch
question about test
Hello, thanks for providing the code for us. I read the code and I have some questions. Is your code only suitable for 256*256 images? I changed the image_size in the config, but it broke in the contextual attention layer.
RuntimeError: Given transposed=1, weight of size [7238, 128, 4, 4], expected input[1, 7038, 46, 153] to have 7238 channels, but got 7038 channels instead.
Yu's code can handle images of any size, so could you tell me what the difference is between yours and Yu's? Thanks!
When I implemented the code, I assumed the input size is 256*256. It is possible that some operation in the contextual attention layer makes it only compatible with the 256*256 image size.
Could you provide more details in the log information to help me find the problem quickly? I can't run the code at this time.
Thanks for replying. The input image's size is 370*1226 and I changed it into 372*1228 to fit the network. Sorry for not sending the log information. The error is in the contextual attention layer. I am a rookie, so I have no idea how to change it.
It seems to be caused by some different padding in the contextual attention layer. You can compare each layer's feature map shape with Yu's version and find the problematic layer (see the shape-hook sketch below), or, as you have done, change the input shape to a compatible one.
I am sorry that I don't have time to check it at this time.
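In case it helps, here is a minimal sketch of how the per-layer feature map shapes could be printed with forward hooks so the two implementations can be compared. The helper name `register_shape_hooks` is made up here and is not part of the repo.

```python
import torch
import torch.nn as nn

def register_shape_hooks(model: nn.Module):
    """Print each leaf module's output shape during the forward pass,
    which makes it easy to compare feature-map sizes between two implementations."""
    def hook(module, inputs, output):
        if isinstance(output, torch.Tensor):
            print(f"{module.__class__.__name__}: {tuple(output.shape)}")
    handles = [m.register_forward_hook(hook)
               for m in model.modules()
               if len(list(m.children())) == 0]  # leaf modules only
    return handles  # call handle.remove() on each when done
```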
I have checked the code and got some results.
My input size should be 368*1224, which can be divided by 8 (see the padding sketch below),
and then the contextual attention layer is fine. The problem may be in the discriminator code: the input should be the mask region, but in the code it is the whole image, so it can only handle image sizes <= 256*256. Sorry, I am a rookie; I can only find this bug but can't fix it. Thanks for the code again. I'm just reporting this bug, and I will try to learn how to fix it.
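As an aside, a minimal sketch of padding the input so both sides become divisible by 8; the helper `pad_to_multiple` is hypothetical, not part of the repo, and the network output would need to be cropped back to the original size afterwards.

```python
import torch
import torch.nn.functional as F

def pad_to_multiple(x: torch.Tensor, multiple: int = 8):
    """Reflect-pad an (N, C, H, W) tensor so H and W become multiples of `multiple`.
    Returns the padded tensor and the original (H, W) for cropping the result back."""
    _, _, h, w = x.shape
    pad_h = (multiple - h % multiple) % multiple
    pad_w = (multiple - w % multiple) % multiple
    x_padded = F.pad(x, (0, pad_w, 0, pad_h), mode='reflect')
    return x_padded, (h, w)
```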
@kinfeparty Thanks for your question. Yes, the discriminators make it compatible only with fixed image sizes, since the last layer is fully connected. You may remove that layer and compute the mean as the output to make it compatible with any input size (see the sketch below).
By the way, there are two discriminators, a global one with the whole image as the input and a local one with the mask region as the input.
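A minimal sketch of what that change might look like. The conv stack below is just a placeholder, not the repo's actual discriminator; the point is replacing the final `nn.Linear` with a global mean so the output no longer depends on the input size.

```python
import torch
import torch.nn as nn

class FullyConvDiscriminator(nn.Module):
    """Sketch of a discriminator whose output is size-agnostic."""
    def __init__(self, in_channels=3, base_channels=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, base_channels, 5, stride=2, padding=2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_channels, base_channels * 2, 5, stride=2, padding=2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_channels * 2, 1, 5, stride=2, padding=2),
        )

    def forward(self, x):
        out = self.features(x)            # (N, 1, H', W'); H', W' depend on the input size
        return out.mean(dim=[1, 2, 3])    # global mean instead of nn.Linear -> (N,)
```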
@DAA233 Oh, sorry, I forgot the nn.Linear! I have just started to learn PyTorch. Thanks for your reply.
@DAA233 Hello, sorry to bother you again. Thanks to your advice, I can now train with my dataset.
I have a similar question to #5, but I'm not sure.
The training is OK. My training image_shape is [256, 256, 3].
I modified and ran test_single.py with my image, but found there is a boundary around the mask region.
I guessed it was caused by the image shape, so I changed the image shape to 256*256 and used examples/center_mask_256.png as the mask. No boundary.
When I changed the mask region to a square mask, the boundary appeared again.
I am confused by this situation. I would be thankful if you could tell me how to fix this bug.
I have no idea about the problem.
But I think you can check the network outputs first, since the results above are copy-pasted from the network outputs. Refer to here.
It's strange. I find that the output's mask has a boundary, but the input mask doesn't. When I use cv2 to read the mask, the boundary disappears. But I don't know why the mask in examples doesn't get this boundary when I print it.
Adding binarization after mask resizing reduces the "gray edge" artifact:
mask[mask > 0.5] = 1.0
mask[mask < 0.5] = 0.0
https://github.com/daa233/generative-inpainting-pytorch/blob/master/test_single.py?plain=1#L64
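For context, a minimal sketch of loading, resizing, and re-binarizing a mask with cv2; the helper name `load_binary_mask` and the exact threshold handling are assumptions, not the repo's code.

```python
import cv2
import numpy as np

def load_binary_mask(path, size):
    """Load a grayscale mask, resize it, and re-binarize so interpolation
    does not leave gray values along the mask boundary. `size` is (width, height)."""
    mask = cv2.imread(path, cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0
    mask = cv2.resize(mask, size, interpolation=cv2.INTER_LINEAR)
    mask[mask > 0.5] = 1.0    # inside the hole
    mask[mask <= 0.5] = 0.0   # known region
    return mask
```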