Qilei Li
> Thanks for the nice work and the very well-written documentation. > > I have some real low-resolution images without any ground truth. I wanted to test your state-of-the-art MZSR model on...
> I have the same problem: I can't use my own images to generate super-resolution outputs, even though I made sure the GT is twice the size of the input. And...
I noticed that you have released four models, namely: 1. Bicubicx2; 2. Directx2; 3. Directx4; 4. Multi-scale. If I want to train the network to obtain these models, could...
> Based on the default setting, you will get the **2. Directx2** model. > > 1. To obtain the Bicubic model, you need to change **line 30** of dataGenerator.py by adding bicubic downscaling...
Here is the config for x2 Bicubic with 100000 iters. `clean_img_LR = imresize(img_HR, scale=1./scale, kernel=Kernel, ds_method='bicubic')` `python main.py --train --gpu 0 --trial 0 --step 0` After training 100000 iters, I tested the trained...
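For anyone trying to reproduce this, here is a minimal sketch of what the dataGenerator.py change might look like in context. The `imresize(..., ds_method='bicubic')` call is quoted verbatim from the comment above; the import path and the wrapper function around it are my assumptions, not the repo's actual layout:

```python
import numpy as np
from imresize import imresize  # assumption: the repo-local resize utility used by MZSR

def make_lr_from_hr(img_HR: np.ndarray, scale: int = 2, Kernel=None):
    """Hypothetical wrapper around the line-30 change in dataGenerator.py.

    Replaces the default direct (kernel) downscaling with bicubic
    downscaling, as suggested for training the Bicubicx2 model.
    """
    clean_img_LR = imresize(img_HR, scale=1. / scale, kernel=Kernel,
                            ds_method='bicubic')
    return clean_img_LR
```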
Thanks for your comment. By setting `if gradients(patch_l.astype(np.float64)/255.) >= 0.005 and np.var(patch_l.astype(np.float64)/255.) >= 0.03`, I do get a 3.37 GB dataset. I will try to train the model using this...
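As a side note, a sketch of how that patch filter could be implemented. The two thresholds are quoted from the condition above; `gradients` is a helper from the repo's dataset-generation script whose exact definition I don't have, so the mean-gradient-magnitude version below is an assumption:

```python
import numpy as np

def gradients(patch: np.ndarray) -> float:
    # Assumption: mean gradient magnitude as a rough edge-content measure
    gy, gx = np.gradient(patch)[:2]
    return float(np.mean(np.sqrt(gx ** 2 + gy ** 2)))

def keep_patch(patch_l: np.ndarray) -> bool:
    # Normalize to [0, 1] as in the quoted condition
    p = patch_l.astype(np.float64) / 255.
    # Discard flat patches: require enough edge content and intensity variance
    return gradients(p) >= 0.005 and np.var(p) >= 0.03
```

Raising or lowering either threshold trades dataset size against patch informativeness, which is presumably how the 3.37 GB figure above was reached.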
Your json file seems OK, but the results are abnormal, which is also weird to me.
GMFN was trained using other code, but the network file, i.e. gmfn_arch.py, was kept the same. Finally, we incorporated it into SRFBN. BTW, what was the GPU usage when you re-trained?