PyTorch-SRGAN
High-resolution images of dimension 96x96?
How did you get 96x96 high-resolution REAL images for comparison? My understanding was that we take the original dataset (CIFAR-10, consisting of 32x32 images), down-sample the images (say by a factor of 4, i.e. to 8x8), and use the down-sampled image together with its original as the training pair.
At test time we would feed a down-sampled image (i.e. 8x8) and expect a 32x32 output that is close to the corresponding original 32x32 image. How come your outputs are 96x96? It seems you first up-scaled the images and then down-sampled them. Won't that affect the quality of the output?
Hi, the results at 96x96 resolution were trained on ImageNet, not CIFAR-10, so the 96x96 high-resolution crops come straight from the original images. Up-scaling first and then down-sampling, as you describe, would indeed degrade quality, as you correctly guess.
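For anyone following along, the usual SRGAN-style pairing is: crop a 96x96 HR patch from the original image, then bicubic-down-sample it by the scale factor to get the LR input. A minimal sketch of that (my own illustration, not the repo's actual data pipeline; `make_training_pair` is a hypothetical helper):

```python
import torch
import torch.nn.functional as F

def make_training_pair(hr_batch, scale=4):
    """Build an LR/HR pair by bicubic down-sampling the HR crop.

    hr_batch: (N, C, H, W) tensor of high-resolution patches,
    e.g. 96x96 crops taken from ImageNet images. No up-scaling
    is involved, so no quality is lost on the HR side.
    """
    lr_batch = F.interpolate(
        hr_batch, scale_factor=1 / scale,
        mode="bicubic", align_corners=False,
    )
    return lr_batch, hr_batch

# 96x96 HR crops -> 24x24 LR inputs for a 4x super-resolution model
hr = torch.rand(8, 3, 96, 96)
lr, hr = make_training_pair(hr, scale=4)
print(lr.shape)  # torch.Size([8, 3, 24, 24])
```

With CIFAR-10 the same recipe would give 32x32 HR and 8x8 LR pairs, which is what the question assumed.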
Also, did you tune the beta in the swish activation function?
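For context, the swish variant usually meant by "tuning beta" is x * sigmoid(beta * x), where beta can either be fixed or learned during training. A small PyTorch sketch of the trainable version (an illustration of the general idea, not the code in this repo):

```python
import torch
import torch.nn as nn

class Swish(nn.Module):
    """Swish activation: x * sigmoid(beta * x).

    beta=1.0 recovers the default swish (aka SiLU). When
    trainable=True, beta is registered as a parameter and
    updated by the optimizer along with the other weights.
    """
    def __init__(self, beta=1.0, trainable=True):
        super().__init__()
        b = torch.tensor(float(beta))
        self.beta = nn.Parameter(b) if trainable else b

    def forward(self, x):
        return x * torch.sigmoid(self.beta * x)

act = Swish(beta=1.0)
x = torch.linspace(-2.0, 2.0, 5)
y = act(x)
```

With beta=1.0 this matches `torch.nn.SiLU`; larger beta pushes the curve toward ReLU, smaller beta toward a scaled linear function.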