Rethinking-Inpainting-MEDFE
very bad prediction.
Hi, can you predict this image, please? The mask can be obtained with the following code:
import cv2
import numpy as np
from PIL import Image

def get_mask(path):
    # Read the mask image (3-channel BGR).
    m = cv2.imread(path)
    new_mask = np.zeros(shape=m.shape, dtype=np.uint8)
    # Average the channels to get a single-channel intensity map.
    m = np.mean(m, axis=2)
    # Mark the pure-white pixels (value 255) as the hole region.
    y, x = np.where(m == 255)
    new_mask[y, x] = 255
    return Image.fromarray(new_mask)
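For example, a quick check of the helper (the file names here are hypothetical):

    mask = get_mask("mask.png")   # hypothetical path to the mask image
    mask.save("mask_binary.png")  # the binarized mask passed to the model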
I get a very bad result with the pre-trained "place2" model.
Very interesting! I think the type of mask may be wrong: the mask does not cover the white regions in your image. I simply dilated the mask to bring its boundary close to the boundary of the white regions, and the results seem more reasonable. As shown in the images below, from left to right: the image you gave, the actual input to the model, the output, and the mask.
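A minimal sketch of that kind of dilation, assuming OpenCV; the 9x9 kernel and single iteration are assumptions, not the exact values used above:

    import cv2
    import numpy as np

    # Load the binary mask as a single-channel image (hypothetical path).
    mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
    # Grow the white (masked) region outward so its boundary extends
    # slightly past the white regions of the damaged image.
    kernel = np.ones((9, 9), dtype=np.uint8)  # kernel size is a guess
    dilated = cv2.dilate(mask, kernel, iterations=1)
    cv2.imwrite("mask_dilated.png", dilated)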
I tested the other mask that I have:
Your original image:
@KumapowerLIU Yes, your result is better. Thank you for helping me. The reason is that the mask doesn't cover the white area. By the way, I am curious what tool you used to make the mask you showed.
Hello author, I found that using a 128×128 mask at the center of CelebA images does not give particularly good results. For example, a 120×120 mask achieves better results. Why?
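For reference, a minimal sketch of such a centered square mask, assuming 256×256 CelebA crops (the image size and the helper name are assumptions):

    import numpy as np
    from PIL import Image

    def center_mask(size=256, hole=128):
        # Hypothetical helper: white square hole centered in a black mask.
        # hole=128 is the standard center mask; hole=120 is the smaller
        # variant mentioned above. size=256 is an assumed image size.
        mask = np.zeros((size, size), dtype=np.uint8)
        start = (size - hole) // 2
        mask[start:start + hole, start:start + hole] = 255
        return Image.fromarray(mask)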
Hello, may I ask how you ran the test? Why is the result I get exactly the same as the original image?