
Performance of given model and trained model

chaos5958 opened this issue 7 years ago • 2 comments

Hello, I tested both the model you provide and a model I trained myself with the default parameters. I also constructed the test dataset with Matlab and confirmed that the bicubic performance matches the paper. However, the VDSR performance is slightly different from what you report in the README.

Besides, I once implemented VDSR with TensorFlow+PIL, but the results were far too low, so I'm currently testing your repo with PyTorch+Matlab. Anyway, thank you for the code!

  1. Why do you 'shave_border' when evaluating PSNR? Is it the norm in super-resolution research? (There is a PSNR sketch after these questions for reference.)

  2. How can I reproduce your result (i.e., better than the paper)?

  3. Should I do the bicubic interpolation on normalized values (e.g., 0-1)? The results seem slightly different. It also seems a little strange that bicubic interpolation is done on double values at training time, but on integer values at test time.
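For reference, this is the kind of shave_border-aware PSNR helper my test code below calls; the border pixels are simply cropped from both images before computing the MSE. It is a sketch written from memory, so it may differ slightly from the exact eval.py in the repo:

import math
import numpy as np

def PSNR(pred, gt, shave_border=0):
    # Crop a shave_border-pixel frame from both images before comparing;
    # the metric is symmetric, so the argument order does not matter.
    height, width = pred.shape[:2]
    pred = pred[shave_border:height - shave_border, shave_border:width - shave_border]
    gt = gt[shave_border:height - shave_border, shave_border:width - shave_border]
    imdff = pred.astype(np.float64) - gt.astype(np.float64)
    rmse = math.sqrt(np.mean(imdff ** 2))
    if rmse == 0:
        return 100
    return 20 * math.log10(255.0 / rmse)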

Set 5, Scale 4, shave_border = 0
bicubic = 28.422
vdsr (given) = 30.797
vdsr (trained) = 30.651

Set 5, Scale 4, shave_border = 4
bicubic = 28.414
vdsr (given) = 30.880
vdsr (trained) = 30.727
vdsr (README) = 31.35 (I want to reproduce this one!)

Here is part of my test code.
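For context, the setup above the loop is roughly the following; the argument defaults, the dataset path, and the checkpoint key are placeholders and may differ from your environment:

import argparse
import glob
import time

import scipy.io as sio
import torch
from torch.autograd import Variable

parser = argparse.ArgumentParser(description="PyTorch VDSR evaluation")
parser.add_argument("--cuda", action="store_true", help="use GPU if available")
parser.add_argument("--model", default="model/model_epoch_50.pth", type=str, help="path to the trained model")
parser.add_argument("--dataset", default="Set5", type=str, help="test set name")
parser.add_argument("--scale", default=4, type=int, help="upscaling factor")
opt = parser.parse_args()

cuda = opt.cuda
# The checkpoint layout ({"model": ...}) is an assumption; a plain state_dict would need load_state_dict instead
model = torch.load(opt.model)["model"]
if cuda:
    model = model.cuda()
else:
    model = model.cpu()

# .mat files produced by the Matlab preprocessing script (path is a placeholder)
filenames = glob.glob("data/{}_mat/*.mat".format(opt.dataset))
total_psnr_bicubic = 0
total_psnr_sr = 0

I run it roughly as python eval.py --cuda --dataset Set5 --scale 4.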

for filename in filenames:
    # Each .mat file holds the ground-truth Y channel and the bicubic-upscaled Y channel
    mat = sio.loadmat(filename)
    im_gt_y = mat['im_label']
    im_b_y = mat['im_input']

    # Normalize to [0, 1] and reshape to a 1 x 1 x H x W tensor
    im_input = im_b_y / 255.
    im_input = Variable(torch.from_numpy(im_input).float()).view(1, -1, im_input.shape[0], im_input.shape[1])

    if cuda:
        im_input = im_input.cuda()
    else:
        im_input = im_input.cpu()

    start_time = time.time()
    im_result_y = model(im_input)
    im_result_y = im_result_y.cpu()
    elapsed_time = time.time() - start_time
    print("Forward Time: {:.5f}sec".format(elapsed_time))

    im_h_y = im_result_y.data[0].numpy().astype(float)
    im_gt_y = im_gt_y.astype(float)
    im_b_y = im_b_y.astype(float)

    # Rescale the network output back to [0, 255] and clip
    im_h_y = im_h_y * 255.
    im_h_y[im_h_y < 0] = 0
    im_h_y[im_h_y > 255.] = 255.
    im_h_y = im_h_y[0, :, :]

    psnr_bicubic = PSNR(im_gt_y, im_b_y, shave_border=opt.scale)
    print(psnr_bicubic)
    psnr_sr = PSNR(im_gt_y, im_h_y, shave_border=opt.scale)
    print(psnr_sr)

    total_psnr_bicubic += psnr_bicubic
    total_psnr_sr += psnr_sr

print("[{} Scale {}] PSNR_bicubic={:.3f}".format(opt.dataset, opt.scale, total_psnr_bicubic / len(filenames)))
print("[{} Scale {}] PSNR_sr={:.3f}".format(opt.dataset, opt.scale, total_psnr_sr / len(filenames)))

Thank you, Hyunho

chaos5958 · Dec 30 '17 04:12

Hi @chaos5958, please refer to https://github.com/twtygqyy/pytorch-SRDenseNet/blob/master/eval.py. Let me know if you have any further problems.

twtygqyy · Jan 04 '18 05:01

@chaos5958 Hi, I wonder if you solved that problem. It happens for me as well: there is a difference between the pre-trained weights and the weights I trained myself. @twtygqyy Hi, I would appreciate your help. How can I get the same PSNR as your weights?

designproj · Nov 30 '18 08:11