
Running evaluation code without training

Open AtiqEmenent opened this issue 3 years ago • 2 comments

I am trying to run the evaluation for your code without doing the training process myself. I am using the pre-trained model ([pretrained.pth.tar]) and the SPyNet weights available in the Evaluation section of this GitHub page, but I am not getting the evaluation results described in the paper. I generated the LR images using cv2.resize() with cv2.INTER_CUBIC as the interpolation parameter (the images were already converted to the LAB color space using cv2.COLOR_BGR2LAB). Should I re-train the model myself, should I generate the LR frames using Matlab's resize (as described in the paper), or am I making some other mistake? Please guide.

My results, for example: for the 'walk' video frames of the Vid4 dataset, I am getting a PSNR-RGB of about 21.08.

The evaluation command I used is: !python evaluate.py --lr_dir=lr-set-lab --key_dir=key-set --target_dir=hr-set --output_dir=sr-set --model_dir=experiments/bix4_keyvsrc_attn --restore_file=pretrained --file_fmt="frame%d.png"

AtiqEmenent avatar Feb 24 '23 05:02 AtiqEmenent

Hi @AtiqEmenent, Matlab's resize performs anti-aliasing along with the interpolation, which gives very different results from cv2.resize. This was something I stumbled upon during development as well. But since most prior works use Matlab's function, I used the same for easier comparison. Here's a reference Matlab function for bicubic downsampling. Also make sure that the normalization during the RGB-to-LAB conversion matches what the pretrained model uses here.
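To see why the two resizers differ: when downscaling, Matlab's imresize stretches the cubic kernel by the scale factor (a low-pass, anti-aliasing effect), while plain cubic interpolation (roughly what cv2.INTER_CUBIC does) keeps the kernel at its original width and lets high frequencies alias. A minimal 1-D numpy sketch of that difference, using the Keys cubic kernel (a = -0.5) that bicubic resizing is based on; the function names here are illustrative, not from the NeuriCam codebase:

```python
import numpy as np

def cubic(x):
    # Keys cubic convolution kernel (a = -0.5), the kernel underlying
    # 'bicubic' resizing in both Matlab and OpenCV.
    x = np.abs(x)
    return np.where(x <= 1, 1.5 * x**3 - 2.5 * x**2 + 1,
           np.where(x <= 2, -0.5 * x**3 + 2.5 * x**2 - 4 * x + 2, 0.0))

def downsample_1d(signal, scale, antialias=True):
    """Downsample a 1-D signal by an integer factor with a bicubic kernel.

    antialias=True stretches the kernel by `scale` (Matlab-imresize-style);
    antialias=False keeps the kernel at unit width, which is roughly what
    plain cubic interpolation (e.g. cv2.INTER_CUBIC) does when downscaling.
    """
    n_out = len(signal) // scale
    out = np.empty(n_out)
    width = float(scale) if antialias else 1.0  # kernel stretch factor
    support = 2.0 * width                       # cubic support is [-2, 2]
    for i in range(n_out):
        # centre of output sample i, in input coordinates
        c = (i + 0.5) * scale - 0.5
        lo, hi = int(np.floor(c - support)), int(np.ceil(c + support))
        taps = np.arange(lo, hi + 1)
        idx = np.clip(taps, 0, len(signal) - 1)   # replicate-pad the edges
        w = cubic((taps - c) / width)
        out[i] = np.dot(w, signal[idx]) / w.sum() # normalize the weights
    return out

# A sine above the output Nyquist rate: it aliases without the stretched kernel.
x = np.sin(np.linspace(0, 120 * np.pi, 256))
aa = downsample_1d(x, 4, antialias=True)
no_aa = downsample_1d(x, 4, antialias=False)
# The anti-aliased result suppresses the high frequency; the plain cubic
# result keeps it as a spurious low-frequency component.
print(np.abs(aa).mean(), np.abs(no_aa).mean())
```

Because the pretrained weights were fit to Matlab-style (anti-aliased) LR inputs, feeding them cv2.INTER_CUBIC frames is a train/test mismatch, which is consistent with a PSNR drop like the one reported above.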

vb000 avatar Feb 24 '23 18:02 vb000

Hi, I want to know whether all input images need to be converted to the LAB color space.

Justarrrrr avatar Apr 18 '24 02:04 Justarrrrr