
all result is 0

Open nkjulia opened this issue 2 years ago • 10 comments

I tried to use the example script to assess some images, but I got 0 for all of them. Why?

import pyiqa
import torch

# list all available metrics
print(pyiqa.list_models())

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

# create metric with default setting
iqa_metric = pyiqa.create_metric('lpips', device=device)
# Note that gradient propagation is disabled by default. set as_loss=True to enable it as a loss function.
iqa_metric = pyiqa.create_metric('lpips', device=device, as_loss=False)

# create metric with custom setting
#iqa_metric = pyiqa.create_metric('psnr', test_y_channel=True, color_space='ycbcr').to(device)

# check if lower better or higher better
print(iqa_metric.lower_better)


import os

for img in [os.path.join("images", k) for k in os.listdir("images")]:
    score_fr = iqa_metric(img, img)
    print(img, score_fr)

nkjulia avatar Jul 03 '23 10:07 nkjulia

This is expected because you are using full-reference metrics (psnr, lpips). A full-reference metric requires a high-quality reference image and measures the difference to that reference, so the distance between an image and itself is zero.
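To illustrate the point, here is a minimal sketch in plain Python (no pyiqa needed) using mean squared error, the simplest full-reference distance; the toy pixel lists and the `mse` helper are illustrative, not part of pyiqa:

```python
# Sketch: a full-reference metric measures the difference between two
# images, so comparing an image with itself always gives zero distance.

def mse(img_a, img_b):
    """Mean squared error between two equally sized pixel lists."""
    assert len(img_a) == len(img_b)
    return sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)

image = [0.1, 0.5, 0.9, 0.3]      # toy "image" as a flat pixel list
distorted = [0.2, 0.4, 0.9, 0.1]  # a slightly different image

print(mse(image, image))      # 0.0 -- same image, zero distance
print(mse(image, distorted))  # > 0 -- a real distortion is measurable
```

This is exactly why the loop `iqa_metric(img, img)` in the question prints 0 for every image.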

If you do not have a high-quality reference, you should use a no-reference metric. You may list the available no-reference metrics with:

pyiqa.list_models(metric_mode='NR')

chaofengc avatar Jul 03 '23 11:07 chaofengc

Hi, I have a question: what are the max and min values of PieAPP between a reference image and a generated image? I got -0.0334 in one of my experiments; is that ok? (I got -6.8 for two identical images, i.e. (img, img).)

shshojaei avatar Jul 03 '23 16:07 shshojaei

Thanks, I tried the NR metrics, but I am wondering how to choose the best one. Any suggestions?

>>> pyiqa.list_models(metric_mode='NR')
['brisque', 'clipiqa', 'clipiqa+', 'clipiqa+_rn50_512', 'clipiqa+_vitL14_512', 'clipscore', 'cnniqa', 'dbcnn', 'entropy', 'fid', 'hyperiqa', 'ilniqe', 'maniqa', 'maniqa-kadid', 'maniqa-koniq', 'musiq', 'musiq-ava', 'musiq-koniq', 'musiq-paq2piq', 'musiq-spaq', 'nima', 'nima-vgg16-ava', 'niqe', 'nrqm', 'paq2piq', 'pi', 'tres', 'tres-flive', 'tres-koniq', 'uranker']

nkjulia avatar Jul 04 '23 02:07 nkjulia

@shshojaei

That is ok for PieAPP because it uses an extra regression layer to produce the final score, which makes PieAPP output higher results for good images. There is no mathematical bound on PieAPP's results.
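A conceptual sketch of why such a score is unbounded: a final linear regression layer has no bounding activation, so nothing keeps its output in any fixed range. The weights and feature values below are made up for illustration; they are not PieAPP's real parameters:

```python
# Sketch: a linear regression head (score = w . f + b) has no bounding
# activation, so negative or arbitrarily large scores are valid outputs.

def regression_head(features, weights, bias):
    """A linear layer mapping internal features to an unbounded score."""
    return sum(w * f for w, f in zip(weights, features)) + bias

weights, bias = [0.8, -1.5, 0.3], -0.2       # hypothetical parameters
good_image_features = [0.9, 0.1, 0.7]        # hypothetical features, good image
identical_pair_features = [0.0, 4.5, 0.0]    # hypothetical features, (img, img)

print(regression_head(good_image_features, weights, bias))       # positive
print(regression_head(identical_pair_features, weights, bias))   # negative
```

So a negative value like -0.0334 or -6.8 is simply a point on that unbounded scale, not an error.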

chaofengc avatar Jul 04 '23 10:07 chaofengc

@nkjulia You may refer to the benchmark results under the tests dir for the performance of these metrics on different benchmarks and choose the one you need.

I would recommend clipiqa+ if you have difficulties selecting a suitable metric.

chaofengc avatar Jul 04 '23 10:07 chaofengc

Thanks. Is there any doc introducing the models?

nkjulia avatar Jul 05 '23 02:07 nkjulia

Sorry, I do not have time to write a comprehensive summary because there are too many related papers. I have listed these works here: https://github.com/chaofengc/Awesome-Image-Quality-Assessment You may refer to the specific papers if you are interested.

chaofengc avatar Jul 05 '23 07:07 chaofengc

Thanks, great job!!

nkjulia avatar Jul 17 '23 01:07 nkjulia

Can these NR-IQA metrics be used for image aesthetic evaluation?

nkjulia avatar Jul 17 '23 01:07 nkjulia

Deep-learning metrics are closely tied to their training datasets. The AVA dataset is currently the main aesthetic dataset, and our toolbox includes some models trained on AVA; you may use the musiq-ava metric for aesthetic evaluation.

It is not good practice to do aesthetic evaluation with metrics that are not trained on AVA.

chaofengc avatar Jul 17 '23 09:07 chaofengc