DeepFuse.pytorch
incorrect display
I tested your code with PyTorch 0.4.1 on Windows, but the fusion result does not display correctly. Any idea what caused this issue and how to fix it?
Hi Bro, have you fixed this problem? I got the same result.
I got the same result, too.
Have you all solved this problem? Please tell me.
I have the same problem. Have you all solved it? Please tell me.
Was this issue resolved? Please share how you were able to solve it?
I have the same problem. Have you all solved it? Please tell me.
Has this issue been solved? Please share how you solved it. I debugged into it: it is a version problem with the sunner package — install the specified version and it runs.
Here is mine.
I changed to another picture and plotted the YCbCr channels. It seems the Y channel is fine:
I FIXED IT!!!
The problem is that the code normalizes the input with transforms.Normalize but never undoes it.
We need to apply the inverse transform, inv_trans, before converting back to RGB:
inv_trans = transforms.Compose([
    transforms.Normalize(mean=[0., 0., 0.], std=[1/0.229, 1/0.224, 1/0.225]),  # Recover the std
    transforms.Normalize(mean=[-0.485, -0.456, -0.406], std=[1., 1., 1.]),     # Recover the mean
])
This is my fixed inference function:
def inference(model, im1, im2, opts):
    # Load the images
    trans = transforms.Compose([
        transforms.ToTensor(),
        transforms.Resize((opts.H, opts.W), antialias=True),  # type: ignore
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    ])
    [im1, im2] = [path_to_ycbcr(im) for im in [im1, im2]]
    assert(im1.size == im2.size)
    [im1, im2] = [torch.unsqueeze(trans(im), 0) for im in [im1, im2]]  # type: ignore
    [im1, im2] = [im.to(opts.device) for im in [im1, im2]]
    # Fusion
    model.eval()
    with torch.no_grad():
        f_y = model.forward(im1[:, 0:1, :, :], im2[:, 0:1, :, :])  # Fuse the Y channels
        [f_cb, f_cr] = weightedFusion(im1[:, 1:2], im2[:, 1:2], im1[:, 2:3], im2[:, 2:3])
        fused = torch.cat((f_y, f_cb, f_cr), dim=1)
    # Reconstruct the fused RGB image: undo the normalization first
    inv_trans = transforms.Compose([
        transforms.Normalize(mean=[0., 0., 0.], std=[1/0.229, 1/0.224, 1/0.225]),  # Recover the std
        transforms.Normalize(mean=[-0.485, -0.456, -0.406], std=[1., 1., 1.]),     # Recover the mean
    ])
    fused = inv_trans(fused)
    fused = ycbcr_to_rgb(transforms.ToPILImage()(fused[0, :, :, :]))
    return transforms.ToTensor()(fused)
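The weightedFusion helper is not shown in this thread. In DeepFuse, the Cb/Cr channels are fused by weighting each pixel by its distance from the neutral chroma value τ. Here is a sketch of that rule — the name weighted_fusion, the eps term, and τ = 0.5 (for [0, 1]-scaled chroma; 8-bit YCbCr would use τ = 128) are my assumptions, and the repo's version may differ:

```python
import torch

def weighted_fusion(cb1, cb2, cr1, cr2, tau=0.5, eps=1e-8):
    # Weight each chroma value by its distance from the neutral point tau;
    # eps avoids division by zero when both inputs are exactly neutral.
    w1, w2 = torch.abs(cb1 - tau), torch.abs(cb2 - tau)
    f_cb = (cb1 * w1 + cb2 * w2) / (w1 + w2 + eps)
    v1, v2 = torch.abs(cr1 - tau), torch.abs(cr2 - tau)
    f_cr = (cr1 * v1 + cr2 * v2) / (v1 + v2 + eps)
    return f_cb, f_cr

# Example: a strongly off-neutral chroma dominates a neutral one
cb1 = torch.full((1, 1, 2, 2), 0.9)   # far from neutral -> large weight
cb2 = torch.full((1, 1, 2, 2), 0.5)   # neutral -> zero weight
f_cb, _ = weighted_fusion(cb1, cb2, cb1, cb2)
print(f_cb[0, 0, 0, 0].item())        # ~0.9
```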
You can find code at my project: https://github.com/CharlesShan-hub/CVPlayground
I also used some self-defined helper functions:
import numpy as np
from PIL import Image
from skimage import color

def path_to_ycbcr(path: str) -> Image.Image:
    """
    Load an image from the given path and convert it to YCbCr format.
    """
    image = np.array(Image.open(path))
    if len(image.shape) == 2:
        image = color.gray2rgb(image)
    image = color.rgb2ycbcr(image)
    return Image.fromarray(image.astype(np.uint8), mode='YCbCr')

def ycbcr_to_rgb(image: Image.Image) -> Image.Image:
    """
    Convert a YCbCr-format image to RGB format.
    """
    image_np = np.array(image) * 1.0
    image_rgb = color.ycbcr2rgb(image_np) * 255.0
    return Image.fromarray(image_rgb.astype(np.uint8), mode="RGB")
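For reference, skimage's rgb2ycbcr/ycbcr2rgb follow the ITU-R BT.601 "digital" convention (Y in [16, 235], Cb/Cr in [16, 240] for RGB inputs in [0, 1]). A minimal NumPy sketch of the forward conversion, if you want to check values without skimage installed — the function name rgb_to_ycbcr_601 is mine:

```python
import numpy as np

def rgb_to_ycbcr_601(rgb):
    # ITU-R BT.601 "digital" YCbCr: R, G, B in [0, 1]
    # -> Y in [16, 235], Cb and Cr in [16, 240]
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 16.0  +  65.481 * r + 128.553 * g +  24.966 * b
    cb = 128.0 -  37.797 * r -  74.203 * g + 112.0   * b
    cr = 128.0 + 112.0   * r -  93.786 * g -  18.214 * b
    return np.stack([y, cb, cr], axis=-1)

# Pure white maps to maximum luma and neutral chroma
white = np.ones((1, 1, 3))
print(rgb_to_ycbcr_601(white)[0, 0])  # ~[235, 128, 128]
```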