DeepFuse.pytorch

incorrect display

Open zdfy31220 opened this issue 3 years ago • 8 comments

I tested your code with PyTorch 0.4.1 on Windows, but the fusion result does not display correctly. Any idea what caused this issue and how to fix it?

zdfy31220 avatar Jan 09 '22 09:01 zdfy31220

Hi Bro, have you fixed this problem? I got the same result.

pandayuanyu avatar Mar 03 '22 12:03 pandayuanyu

I got the same result, too.

YZG-gaiciyang avatar Mar 24 '22 02:03 YZG-gaiciyang

Have you all solved this problem? Please tell me.

DesBonbons avatar May 16 '23 19:05 DesBonbons

I have the same problem. Have you all solved it? Please tell me.

sirizhou avatar Dec 22 '23 16:12 sirizhou


Was this issue resolved? Please share how you were able to solve it?

maithal avatar Jan 04 '24 10:01 maithal

I have the same problem; have you all solved it? Please tell me.

Was this issue resolved? Please share how you solved it. I debugged into it: it is a version problem with the sunner package; download the specified version and it runs.

rxqH avatar Apr 25 '24 09:04 rxqH

Here is mine.

I changed to another picture and plotted the YCbCr channels; it seems the Y channel is fine. (screenshot)

CharlesShan-hub avatar Jul 13 '24 12:07 CharlesShan-hub

I FIX IT!!!

Because the preprocessing uses transforms.Normalize, we need to undo it with an inverse transform before converting back to an image. We need to use inv_trans!

inv_trans = transforms.Compose([
    transforms.Normalize(mean=[0., 0., 0.], std=[1/0.229, 1/0.224, 1/0.225]),  # Recover the std
    transforms.Normalize(mean=[-0.485, -0.456, -0.406], std=[1., 1., 1.]),  # Recover the mean
])

This is my fixed inference function:

def inference(model, im1, im2, opts):
    # Load the images and convert them to YCbCr
    trans = transforms.Compose([
        transforms.ToTensor(),
        transforms.Resize((opts.H, opts.W), antialias=True),  # type: ignore
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    ])

    [im1, im2] = [path_to_ycbcr(im) for im in [im1, im2]]
    assert im1.size == im2.size
    [im1, im2] = [torch.unsqueeze(trans(im), 0) for im in [im1, im2]]  # type: ignore
    [im1, im2] = [im.to(opts.device) for im in [im1, im2]]

    # Fusion: the network fuses the Y channels; Cb/Cr are fused by weighting
    model.eval()
    with torch.no_grad():
        f_y = model.forward(im1[:, 0:1, :, :], im2[:, 0:1, :, :])
        [f_cb, f_cr] = weightedFusion(im1[:, 1:2], im2[:, 1:2], im1[:, 2:3], im2[:, 2:3])
        fused = torch.cat((f_y, f_cb, f_cr), dim=1)

    # Undo the normalization, then reconstruct the fused RGB image
    inv_trans = transforms.Compose([
        transforms.Normalize(mean=[0., 0., 0.], std=[1/0.229, 1/0.224, 1/0.225]),
        transforms.Normalize(mean=[-0.485, -0.456, -0.406], std=[1., 1., 1.]),
    ])

    fused = inv_trans(fused)
    fused = ycbcr_to_rgb(transforms.ToPILImage()(fused[0, :, :, :]))

    return transforms.ToTensor()(fused)


You can find code at my project: https://github.com/CharlesShan-hub/CVPlayground


I also used some self-written helper functions:

# Requires numpy, Pillow, and scikit-image
import numpy as np
from PIL import Image
from skimage import color

def path_to_ycbcr(path: str) -> Image.Image:
    """
    Load an image from the given path and convert it to YCbCr format.
    """
    image = np.array(Image.open(path))
    if len(image.shape) == 2:  # grayscale: replicate to 3 channels first
        image = color.gray2rgb(image)
    image = color.rgb2ycbcr(image)
    return Image.fromarray(image.astype(np.uint8), mode='YCbCr')


def ycbcr_to_rgb(image: Image.Image) -> Image.Image:
    """
    Convert a YCbCr-format image to RGB format.
    """
    image_np = np.array(image) * 1.0               # to float; Y is in [16, 235]
    image_rgb = color.ycbcr2rgb(image_np) * 255.0  # back to the [0, 255] range
    return Image.fromarray(image_rgb.astype(np.uint8), mode="RGB")
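A small round-trip sanity check for these two helpers (a sketch, assuming numpy, Pillow, and scikit-image are installed): converting mid-range RGB pixels to a YCbCr PIL image and back should recover the original values up to a few quantization levels. Mid-range values are used because, as with the missing inv_trans above, out-of-range floats wrap around when cast to uint8.

```python
import numpy as np
from PIL import Image
from skimage import color

rng = np.random.default_rng(0)
rgb = rng.integers(30, 226, size=(16, 16, 3), dtype=np.uint8)  # mid-range pixels

# RGB -> YCbCr PIL image (same steps as path_to_ycbcr, minus file loading)
ycbcr = Image.fromarray(color.rgb2ycbcr(rgb).astype(np.uint8), mode='YCbCr')

# YCbCr -> RGB (same steps as ycbcr_to_rgb)
rgb_back = np.array(Image.fromarray(
    (color.ycbcr2rgb(np.array(ycbcr) * 1.0) * 255.0).astype(np.uint8), mode='RGB'))

# The max per-channel error should be only a few levels (uint8 quantization)
print(np.abs(rgb.astype(int) - rgb_back.astype(int)).max())
```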

CharlesShan-hub avatar Jul 20 '24 13:07 CharlesShan-hub