Gongfan Fang
Hi @Pixie412, maybe the inconsistency was caused by the parameters of ``skimage.measure.compare_ssim``. You may want to use it as follows:

```python
ssim_skimage = structural_similarity(img, img_noise, win_size=11, multichannel=True, sigma=1.5,
                                     data_range=1, use_sample_covariance=False, gaussian_weights=True)
```
Hi @akashsara, this repo is consistent with TensorFlow's ssim, so it does not require any conversion if RGB ssim is what you want.
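A minimal sketch of what that looks like, using random RGB tensors in [0, 1] as stand-ins for real images:

```python
import torch
from pytorch_msssim import ssim

# hypothetical RGB inputs in [0, 1], shape (N, 3, H, W); load your own images instead
img1 = torch.rand(1, 3, 256, 256)
img2 = (img1 + 0.05 * torch.randn_like(img1)).clamp(0, 1)

# RGB ssim directly, no grayscale/YCbCr conversion needed
print(ssim(img1, img2, data_range=1.0))
```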
BTW, the benchmark in the README was based on RGB ssim/ms-ssim.
Hi @Ford666, please make sure that the input is a torch.Tensor.
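For example, a minimal sketch of converting a numpy image into the expected (N, C, H, W) torch.Tensor, assuming a uint8 HxWxC array:

```python
import numpy as np
import torch
from pytorch_msssim import ssim

# hypothetical uint8 HxWxC numpy image; replace with your own loaded image
img_np = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)

# convert to a float (N, C, H, W) torch.Tensor
img = torch.from_numpy(img_np).permute(2, 0, 1).unsqueeze(0).float()
img_noise = (img + 5 * torch.randn_like(img)).clamp(0, 255)

print(ssim(img, img_noise, data_range=255))
```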
The default behavior of `structural_similarity` is inconsistent with the original paper. Please add these args:

```python
ssim_skimage = structural_similarity(img, img_noise, win_size=11, multichannel=True, sigma=1.5,
                                     data_range=1, use_sample_covariance=False, gaussian_weights=True)
```
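As a sanity check, a sketch that compares the two implementations on the same hypothetical float RGB images in [0, 1]; with these parameters the results should be close:

```python
import numpy as np
import torch
from skimage.metrics import structural_similarity
from pytorch_msssim import ssim

# hypothetical float RGB images in [0, 1]; substitute real images as needed
img = np.random.rand(256, 256, 3).astype(np.float32)
img_noise = np.clip(img + 0.05 * np.random.randn(256, 256, 3).astype(np.float32), 0, 1)

# note: newer skimage versions replace multichannel=True with channel_axis=-1
ssim_sk = structural_similarity(img, img_noise, win_size=11, multichannel=True, sigma=1.5,
                                data_range=1, use_sample_covariance=False, gaussian_weights=True)

to_tensor = lambda x: torch.from_numpy(x).permute(2, 0, 1).unsqueeze(0)
ssim_pt = ssim(to_tensor(img), to_tensor(img_noise), data_range=1.0)

print(ssim_sk, ssim_pt.item())
```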
Hi @WYCAS, it seems that the input tensor was an RGB image. Could you provide more details, such as a minimal example to reproduce this error?
The ssim metric worked well on my laptop. Maybe the parameter `channel` was missing in your code?

```python
ssim = SSIM(data_range=1., channel=1)
```
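For reference, a minimal sketch of using the SSIM module on single-channel inputs (the shapes below are just placeholders):

```python
import torch
from pytorch_msssim import SSIM

# hypothetical grayscale batch in [0, 1], shape (N, 1, H, W)
x = torch.rand(4, 1, 256, 256)
y = (x + 0.05 * torch.randn_like(x)).clamp(0, 1)

ssim_module = SSIM(data_range=1., channel=1)  # channel must match the input's channel dimension
print(ssim_module(x, y))
```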
Hi @ooodragon94, sorry for the late reply. Maybe your decoded images are not strictly limited to [0, 1]?
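If that is the case, a quick sketch of clamping the outputs before computing ssim (the names and shapes here are hypothetical):

```python
import torch
from pytorch_msssim import ssim

# hypothetical decoder outputs that may slightly overshoot [0, 1]
decoded = torch.rand(1, 3, 256, 256) * 1.1 - 0.05
target = torch.rand(1, 3, 256, 256)

decoded = decoded.clamp(0, 1)  # keep the values inside the declared data_range
print(ssim(decoded, target, data_range=1.0))
```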
Hi @stevenewbold, maybe you can resize your images to meet the resolution requirement.
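For example, a sketch of upscaling small inputs before ms-ssim; ms-ssim downsamples the image 4 times, so with the default win_size=11 the smaller side should be larger than roughly 160 (the sizes below are placeholders):

```python
import torch
import torch.nn.functional as F
from pytorch_msssim import ms_ssim

# hypothetical small images that violate the size requirement
img1 = torch.rand(1, 3, 64, 64)
img2 = torch.rand(1, 3, 64, 64)

# upscale so the smaller side is large enough for the 4 downsampling steps
img1 = F.interpolate(img1, size=(256, 256), mode='bilinear', align_corners=False)
img2 = F.interpolate(img2, size=(256, 256), mode='bilinear', align_corners=False)

print(ms_ssim(img1, img2, data_range=1.0))
```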
Hi @hezichuanqi, ignoring background pixels is a common practice in scene parsing. These background pixels are typically outliers which cannot be appropriately annotated.

> So, my questions is: when i...
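As an illustration of that practice, a minimal sketch of skipping background pixels during training, assuming they are labeled with 255 (the label value, class count, and shapes are assumptions, not taken from the repo):

```python
import torch
import torch.nn as nn

# hypothetical logits (N, num_classes, H, W) and labels (N, H, W) with background marked as 255
num_classes = 19
logits = torch.randn(2, num_classes, 64, 64)
labels = torch.randint(0, num_classes, (2, 64, 64))
labels[:, :8, :] = 255  # pretend a border region is unannotated background

# ignore_index drops those pixels from the loss (the same idea applies when computing mIoU)
criterion = nn.CrossEntropyLoss(ignore_index=255)
loss = criterion(logits, labels)
print(loss)
```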