EDSR/RDN backbone.
Hello,
In the paper you reported results only with SwinIR as the backbone. I was wondering whether you also ran experiments with EDSR or RDN backbones. Methods such as LIIF, LTE, and CiaoSR report their results with EDSR/RDN backbones, so I'd like to compare your results under identical settings. I couldn't find this information in the repository or the paper; if you have those results, could you please share them?
I also tried reproducing your work with an EDSR backbone using the implementation details in the paper (256 × 256 GT patches, Adam, L1 loss, etc.), but the performance didn't meet my expectations. Do you have any training tips or recommendations?
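For reference, here is a rough, self-contained sketch of the training step I'm using (the tiny conv stack and random tensors are placeholders standing in for the actual EDSR backbone, the continuous decoder, and DIV2K patches):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder network standing in for the EDSR backbone + continuous decoder.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1),
    nn.ReLU(inplace=True),
    nn.Conv2d(64, 3, 3, padding=1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # Adam, as described in the paper
criterion = nn.L1Loss()                                    # L1 loss, as described in the paper

# Dummy data standing in for 256x256 GT patches and their bicubic-downsampled LR inputs.
gt = torch.rand(4, 3, 256, 256)
lr = F.interpolate(gt, scale_factor=0.5, mode="bicubic", align_corners=False)

# One training step on the upsampled LR input (a real run would feed LR features
# plus query coordinates to the continuous decoder instead).
pred = model(F.interpolate(lr, size=gt.shape[-2:], mode="bicubic", align_corners=False))
loss = criterion(pred, gt)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"L1 loss: {loss.item():.4f}")
```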
By the way, I tested your HAT + Continuous Gaussian pre-trained model and was impressed by how well it performed. Thank you for the outstanding research!
Thanks for your interest in our work!
We also conducted experiments with the EDSR and RDN backbones and achieved good performance. However, since those experiments were completed internally at the company, the pre-trained weight files may not be available for release; you are welcome to reproduce the results yourself. In our experience, initializing the backbone with pre-trained weights before training is beneficial for model optimization.
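To make that tip concrete, here is a minimal sketch of warm-starting the backbone from a pre-trained state dict (module names, sizes, and the checkpoint handling are illustrative, not taken from our actual code):

```python
import torch
import torch.nn as nn

# Stand-ins: a tiny "EDSR-like" feature extractor and a continuous-SR model
# that reuses it as its encoder. Names and sizes are illustrative only.
class TinyEncoder(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.body = nn.Sequential(
            *[nn.Conv2d(channels, channels, 3, padding=1) for _ in range(4)]
        )

    def forward(self, x):
        return self.body(self.head(x))

class ContinuousSRModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = TinyEncoder()             # backbone to be warm-started
        self.render_head = nn.Conv2d(64, 3, 1)   # placeholder rendering head

    def forward(self, x):
        return self.render_head(self.encoder(x))

# Pretend this state_dict came from classic-SR pre-training; in practice you
# would torch.load(...) the EDSR checkpoint and pull out the encoder weights.
pretrained_encoder_state = TinyEncoder().state_dict()

model = ContinuousSRModel()
# strict=False loads only the keys the encoder shares with the checkpoint
# and reports anything that does not match.
missing, unexpected = model.encoder.load_state_dict(pretrained_encoder_state, strict=False)
print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")
```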
By the way, we recommend using larger backbones to enhance the model's performance.