Pyramid-Attention-Networks

[Preprint] Pyramid Attention Networks for Image Restoration: new state-of-the-art results on multiple image restoration tasks, including denoising, demosaicing, compression artifact reduction, and super-resolution.

7 Pyramid-Attention-Networks issues

Hello, and thank you for your team's work. I have a question: the paper does not mention the training details. Does the pretrained model N50 mean the original DIV2K images were downscaled by a factor of 50? The dataset I downloaded from the link on the Readme.md page only contains the 2x, 3x, and 4x scales, not the 10x, 30x, 50x you provide...

Hello, does pyramid attention use an especially large amount of GPU memory? Can a single card handle large images at test time?
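
For context, the test commands in the issues below pass --chop, which trades compute for memory by super-resolving the image in overlapping tiles instead of all at once. Below is a minimal sketch of the idea, assuming an EDSR-style model that maps a (B, C, H, W) tensor to its upscaled counterpart; it is an illustration, not the repo's implementation:

```python
# Memory-saving "chop" inference: split the input into four overlapping
# quadrants, super-resolve each separately, and stitch the outputs back
# together. `overlap` pixels of context are shared between quadrants.
import torch

def forward_chop(model, x, scale=2, overlap=10):
    _, _, h, w = x.shape
    h_half, w_half = h // 2, w // 2
    h_in, w_in = h_half + overlap, w_half + overlap
    patches = [
        x[..., :h_in, :w_in],    # top-left
        x[..., :h_in, -w_in:],   # top-right
        x[..., -h_in:, :w_in],   # bottom-left
        x[..., -h_in:, -w_in:],  # bottom-right
    ]
    with torch.no_grad():
        outs = [model(p) for p in patches]
    h_out, w_out = h * scale, w * scale
    h_cut, w_cut = h_half * scale, w_half * scale
    out = x.new_zeros(x.size(0), outs[0].size(1), h_out, w_out)
    # Keep only the non-overlapping core of each quadrant's output.
    out[..., :h_cut, :w_cut] = outs[0][..., :h_cut, :w_cut]
    out[..., :h_cut, w_cut:] = outs[1][..., :h_cut, -(w_out - w_cut):]
    out[..., h_cut:, :w_cut] = outs[2][..., -(h_out - h_cut):, :w_cut]
    out[..., h_cut:, w_cut:] = outs[3][..., -(h_out - h_cut):, -(w_out - w_cut):]
    return out
```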

Testing the 2x super-resolution pretrained model downloaded from Google Drive on Set5 gives a PSNR of only 30.460: --model PAEDSR --data_test Set5 --save_results --rgb_range 1 --data_range 801-900 --scale 2 --n_feats 256 --n_resblocks 32 --res_scale 0.1 --pre_train D:\PANSR\pretrain_model\model_x2.pt --test_only --chop
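
A PSNR below the published number is often an evaluation-convention mismatch rather than a model problem: super-resolution papers typically report PSNR on the luminance (Y) channel with a scale-pixel border shaved off, which yields higher values than naive RGB PSNR. Below is a minimal sketch of that convention; treating it as the cause here is an assumption, not a confirmed diagnosis:

```python
# EDSR-style evaluation: convert to the Y channel (ITU-R BT.601 luma),
# shave a `scale`-pixel border, then compute PSNR on the remainder.
import numpy as np

def psnr_y(sr, hr, scale):
    """sr, hr: float arrays of shape (H, W, 3) with values in [0, 255]."""
    coeffs = np.array([65.738, 129.057, 25.064]) / 255.0
    sr_y = sr @ coeffs
    hr_y = hr @ coeffs
    diff = (sr_y - hr_y)[scale:-scale, scale:-scale]
    mse = np.mean(diff ** 2)
    return 10 * np.log10(255.0 ** 2 / mse)
```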

Hello, I have read the attention code but still don't quite understand the convolution and deconvolution parts; could you explain them? Also, why do the blocks across all scales add up to H*W?
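
One likely source of the H*W count: extracting k×k patches with stride 1 and "same" padding produces exactly one patch centered at each spatial location, i.e. H*W patches per feature map. A minimal sketch verifying the count with torch.nn.functional.unfold (illustrative, not the repo's code):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 64, 32, 48)  # (B, C, H, W)
k = 3
# Stride 1 with padding k//2 keeps one patch per spatial position.
patches = F.unfold(x, kernel_size=k, padding=k // 2, stride=1)
print(patches.shape)  # torch.Size([1, 576, 1536]) -> 64*k*k channels, L = H*W = 32*48
```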

When I run "python main.py --model PAEDSR --data_test Set5+Set14+B100+Urban100 --save_results --rgb_range 1 --data_range 801-900 --scale 2 --n_feats 256 --n_resblocks 32 --res_scale 0.1 --pre_train ../model_x2.pt --test_only --chop" to test, it gives...

I'm using your model, in particular the CAR models. I downloaded the full DIV2K dataset, but inside there aren't the x10, x20, x30, x40 folders, only x2, x3,...

How can I obtain the X10-X40 versions of DIV2K_train_LR_bicubic and DIV2K_valid_LR_bicubic?
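
Given that the issues above concern the CAR (compression artifact reduction) models, the X10-X40 folders most likely hold JPEG-compressed copies of the DIV2K HR images at quality factors 10-40 rather than further-downscaled ones; this is an assumption, not something the repo confirms. A minimal sketch for regenerating such folders, with hypothetical paths and file naming:

```python
# ASSUMPTION: X{q} denotes JPEG quality factor q, a common layout for
# compression-artifact-reduction data (not confirmed by the repo).
# All paths and the output naming scheme are hypothetical.
import os
from PIL import Image

hr_dir = "DIV2K/DIV2K_train_HR"  # hypothetical path
for quality in (10, 20, 30, 40):
    out_dir = f"DIV2K/DIV2K_train_LR_bicubic/X{quality}"
    os.makedirs(out_dir, exist_ok=True)
    for name in sorted(os.listdir(hr_dir)):
        img = Image.open(os.path.join(hr_dir, name)).convert("RGB")
        base, _ = os.path.splitext(name)
        # Re-encode at the given JPEG quality to introduce compression artifacts.
        img.save(os.path.join(out_dir, f"{base}x{quality}.jpg"), quality=quality)
```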