Kai Zhang
@XSLXANDY IRCNN is a model-based optimization method, not an end-to-end trained network: it plugs a set of CNN denoisers into the HQS (half quadratic splitting) inference. So, you only need to train the...
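The plug-and-play HQS idea above can be sketched as follows for the simplest case (pure denoising, identity degradation). A 3x3 box filter stands in for the trained CNN denoiser, and all parameter values are illustrative, not IRCNN's:

```python
import numpy as np

def box_denoise(z):
    # stand-in "denoiser": 3x3 box filter with edge padding
    p = np.pad(z, 1, mode="edge")
    return sum(p[i:i + z.shape[0], j:j + z.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def hqs_denoise(y, mu=1.0, iters=5):
    # HQS alternates a closed-form data subproblem with a denoising step
    z = y.copy()
    for _ in range(iters):
        x = (y + mu * z) / (1.0 + mu)   # data-fidelity subproblem
        z = box_denoise(x)              # prior subproblem = plug in denoiser
    return z

y = np.random.rand(32, 32)              # stand-in noisy observation
out = hqs_denoise(y)
```

In IRCNN the box filter is replaced by the trained CNN denoisers and the data subproblem depends on the degradation (e.g. blur kernel); this sketch only shows the alternating structure.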
The estimated blur kernels from Deblur_set1 are given by [Xu](http://webdav.is.mpg.de/pixel/benchmark4camerashake/). The others are given by [14_text_deblurring_code and 16_cvpr_dark_channel_deblur_code_v1](https://github.com/rgbitx/image_deblur_code).
https://github.com/cszn/KAIR
You can refer to the PyTorch implementation: http://www.ipol.im/pub/pre/231/ffdnet-pytorch.zip http://www.ipol.im/pub/pre/231/
The 3-channel image is first reshaped into 12-channel sub-images by a PixelUnshuffle layer with downscale factor 2.
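A minimal numpy sketch of that reshaping (mirroring what PyTorch's `nn.PixelUnshuffle` does): each 2x2 spatial block is moved into the channel dimension, so 3 channels become 3 * 2 * 2 = 12 at half the spatial resolution:

```python
import numpy as np

def pixel_unshuffle(x, r=2):
    # x: (C, H, W) -> (C*r*r, H//r, W//r), like nn.PixelUnshuffle
    c, h, w = x.shape
    x = x.reshape(c, h // r, r, w // r, r)
    x = x.transpose(0, 2, 4, 1, 3)          # (C, r, r, H//r, W//r)
    return x.reshape(c * r * r, h // r, w // r)

img = np.random.rand(3, 64, 64)             # stand-in color image
sub = pixel_unshuffle(img)
print(sub.shape)                            # (12, 32, 32)
```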
See https://github.com/cszn/FFDNet/blob/master/Demo_multivariate_Gaussian_noise.m @aGIToz
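For readers without MATLAB, the same kind of channel-correlated (multivariate) Gaussian noise can be sampled in Python; the covariance matrix below is an illustrative choice, not the values from the linked demo:

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 64, 64
mean = np.zeros(3)
# illustrative cross-channel covariance (in intensity units squared)
sigma = np.array([[25.0, 10.0,  5.0],
                  [10.0, 25.0, 10.0],
                  [ 5.0, 10.0, 25.0]])
noise = rng.multivariate_normal(mean, sigma, size=(h, w))   # (H, W, 3)
```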
https://github.com/matri123/IRCNN_deblur_Keras
FFDNet+ means “Multiscale FFDNet”. The results of FFDNet+ with manually selected uniform noise level map on [DND](https://noise.visinf.tu-darmstadt.de/benchmark/#results_srgb) dataset can be downloaded from https://drive.google.com/drive/folders/1OlBxmWMH8GZNts8DnMfnb-gY7kgx1bpE
Check the bicubic kernel as a reference, see https://github.com/cszn/USRNet
Make sure you also provide the blur kernel, because IRCNN takes both the blurred image and the blur kernel as input. For your own blurred image, please estimate the blur kernel by...
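As an illustrative sketch only: this is how a (blurred image, kernel) input pair for a non-blind method like IRCNN can be synthesized. The Gaussian kernel here is a stand-in; for a real photo the kernel must first be estimated:

```python
import numpy as np

def gaussian_kernel(size=15, sigma=2.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()                      # blur kernels sum to 1

def blur(img, k):
    # circular (periodic-boundary) convolution via FFT
    kp = np.zeros_like(img)
    kp[:k.shape[0], :k.shape[1]] = k
    kp = np.roll(kp, (-(k.shape[0] // 2), -(k.shape[1] // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kp)))

sharp = np.random.rand(128, 128)            # stand-in grayscale image
kernel = gaussian_kernel()
blurred = blur(sharp, kernel)               # (blurred, kernel) goes to IRCNN
```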