chainer-partial_convolution_image_inpainting
Unwanted border around the hole due to the way the TV loss is calculated?
I have trained the network using the latest code, but it produces an unwanted border surrounding the hole. I used a custom mask; see the following output mask as an [example...](url).
I would appreciate it if anyone could come up with a solution to this problem. Thank you.
I don't know how you generated your custom mask. However, the mask image produced in the evaluation step is the mask postprocessed by batch_postprocess_images in utils.py. I hope this helps.
https://github.com/SeitaroShinagawa/chainer-partial_convolution_image_inpainting/blob/master/evaluation.py#L47
https://github.com/SeitaroShinagawa/chainer-partial_convolution_image_inpainting/blob/master/utils.py
import numpy as np

def batch_postprocess_images(img, batch_w, batch_h):
    # img: (batch, channels, width, height), values in [-1, 1]
    b, ch, w, h = img.shape
    # arrange the batch as a batch_w x batch_h grid of images
    img = img.reshape((batch_w, batch_h, ch, w, h))
    img = img.transpose(0, 1, 3, 4, 2)
    # rescale [-1, 1] -> [0, 255] and convert to uint8
    img = (img + 1) * 127.5
    img = np.clip(img, 0, 255)
    img = img.astype(np.uint8)
    # tile the grid into a single (w*batch_w, h*batch_h, ch) image
    img = img.reshape((batch_w, batch_h, w, h, ch)).transpose(0, 2, 1, 3, 4).reshape((w * batch_w, h * batch_h, ch))
    return img
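For reference, here is a quick shape check of what the postprocessing does (a minimal sketch; the 2x2 grid and 64x64 image size are arbitrary choices for illustration):

```python
import numpy as np

def batch_postprocess_images(img, batch_w, batch_h):
    # Same function as in utils.py: tile a batch into one uint8 grid image.
    b, ch, w, h = img.shape
    img = img.reshape((batch_w, batch_h, ch, w, h))
    img = img.transpose(0, 1, 3, 4, 2)
    img = (img + 1) * 127.5
    img = np.clip(img, 0, 255)
    img = img.astype(np.uint8)
    img = img.reshape((batch_w, batch_h, w, h, ch)).transpose(0, 2, 1, 3, 4).reshape((w * batch_w, h * batch_h, ch))
    return img

# batch of 4 images (tiled as a 2x2 grid), 3 channels, 64x64, values in [-1, 1]
batch = np.random.uniform(-1, 1, size=(4, 3, 64, 64)).astype(np.float32)
grid = batch_postprocess_images(batch, 2, 2)
print(grid.shape, grid.dtype)  # (128, 128, 3) uint8
```

So whatever values your custom mask contains, they get rescaled from [-1, 1] to [0, 255] here before being saved.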
Best,
Thank you for your immediate reply. My input mask is a simple mask generated by thresholding an image (see below). As you said, the problem might occur during postprocessing.
After spending two days on this problem, I finally found the cause and solved it. Let me explain. The problem comes from converting the mask image to grayscale with the default settings. During the conversion, pixel values can end up anywhere between 0 and 255 (in particular, the pixels around and inside the hole). However, the pixel values in the mask image are supposed to be exactly 0 or 255 (black or white). As a result, the pixels that fall between black and white are ignored when the hole is selected during training. There are two ways to solve this:
- Instead of using an RGB image for the mask, use a binary image and extend its dimensions for broadcasting (recommended solution).
- Change the default image-conversion setup like this (simple solution):
mask_transform = transforms.Compose([transforms.Resize(size=size),
                                     transforms.ToTensor()])
mask = mask_transform(mask.convert(mode='L', matrix=None, dither=None,
                                   palette='ADAPTIVE', colors=255))
or just edit this inside place2.py, in both the train and test set functions:

mask = cv2.imread(idM, cv2.IMREAD_GRAYSCALE)
r, mask = cv2.threshold(mask, 128, 255, cv2.THRESH_BINARY)
return img, mask
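To see the effect of the threshold without cv2, here is a small NumPy sketch (the exact hole-selection rule shown is an assumption for illustration; cv2.threshold with THRESH_BINARY maps values above the threshold to maxval and the rest to 0):

```python
import numpy as np

# A toy mask whose hole border has intermediate gray values, as produced
# by a default grayscale conversion.
mask = np.array([0, 64, 128, 200, 255], dtype=np.uint8)

# If the hole/valid regions are selected by exact comparison (assumption),
# the intermediate pixels at 64/128/200 are silently dropped.
hole = (mask == 0)
valid = (mask == 255)
print(hole.sum() + valid.sum())   # only 2 of 5 pixels accounted for

# cv2.threshold(mask, 128, 255, cv2.THRESH_BINARY) is equivalent to:
binary = np.where(mask > 128, 255, 0).astype(np.uint8)
print(np.unique(binary))          # [  0 255]
```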
This setup maps the pixels between black and white to the nearest of the two colors. I used it and trained for a few iterations; the obvious border line surrounding the hole/mask is greatly reduced. I'm guessing that with the first solution the border could be avoided completely. Best of luck.
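The first (recommended) solution could look something like the sketch below: keep the mask binary and single-channel, then add a channel axis so it broadcasts against a (channels, width, height) image. The shapes and the masking expression are assumptions for illustration, not the repository's actual code.

```python
import numpy as np

# Stand-in for a loaded binary mask: strictly 0.0 or 1.0, single channel.
binary = (np.random.rand(256, 256) > 0.5).astype(np.float32)

# Extend the dimensions so the mask broadcasts over the channel axis.
mask = binary[np.newaxis, :, :]        # shape (1, 256, 256)

img = np.random.rand(3, 256, 256).astype(np.float32)
masked = img * mask                    # broadcasts to (3, 256, 256)
print(masked.shape)  # (3, 256, 256)
```

Because the mask never passes through a grayscale conversion, no intermediate pixel values can appear around the hole in the first place.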