RLFN
Winner of the runtime track in the NTIRE 2022 Challenge on Efficient Super-Resolution
Hello, [MSU Graphics & Media Lab Video Group](https://videoprocessing.ai/) has recently launched two new Super-Resolution Benchmarks. * [Video Upscalers Benchmark: Quality Enhancement](https://videoprocessing.ai/benchmarks/video-upscalers.html) determines the best upscaling methods for increasing video resolution...
Hi, thanks for your great work! I have a question about the contrastive loss. What do (Ypos, Yneg, Yanchor) stand for?
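The names suggest a standard anchor/positive/negative triplet formulation. Below is a minimal hedged sketch of one common contrastive loss of that shape, which pulls anchor features toward the positive and pushes them away from the negative; the function name, distance choice (L1), and ratio form are illustrative assumptions, not the authors' actual implementation (which is not in the repo).

```python
import numpy as np

def contrastive_loss(f_anchor, f_pos, f_neg, eps=1e-8):
    """Illustrative triplet-style contrastive loss (NOT the RLFN code).

    Minimizing the ratio pulls the anchor's features toward the
    positive sample and away from the negative sample.
    """
    d_pos = np.abs(f_anchor - f_pos).mean()  # anchor-positive distance
    d_neg = np.abs(f_anchor - f_neg).mean()  # anchor-negative distance
    return d_pos / (d_neg + eps)

# Example: features identical to the positive and far from the
# negative give a near-zero loss.
a = np.ones((8, 8))
b = np.zeros((8, 8))
print(contrastive_loss(a, a, b))  # 0.0
```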
Hi: What is the purpose of self.conv_f in class ESA in model/rlfn_ntire.py? This convolution layer does not appear in the ESA structure shown in Figure 3 of the paper.
Trained models
Could you provide the ×2 trained model? Thank you very much. Also, would it be possible to share the pretrained models of the other methods compared in the paper?
What is the structure of the feature extractor? Is it just one Conv k3s1 - Tanh - Conv k3s1?
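Taking the Conv k3s1 - Tanh - Conv k3s1 structure the question describes at face value, a minimal PyTorch sketch would look like the following; the channel counts (3 in, 64 out) are assumptions for illustration, not values confirmed by the paper or repo.

```python
import torch
import torch.nn as nn

# Hypothetical feature extractor: Conv(k3, s1) -> Tanh -> Conv(k3, s1),
# as described in the question. Channel widths are illustrative.
feature_extractor = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1),
    nn.Tanh(),
    nn.Conv2d(64, 64, kernel_size=3, stride=1, padding=1),
)

x = torch.randn(1, 3, 32, 32)       # a dummy RGB patch
features = feature_extractor(x)      # spatial size is preserved by padding=1
print(features.shape)                # torch.Size([1, 64, 32, 32])
```

With stride 1 and padding 1, a 3x3 convolution keeps the spatial resolution, so the extracted features align pixel-for-pixel with the input, which is convenient for per-pixel feature distances.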
Thanks for sharing your code. I want to reproduce your work, but I can't find the code for the contrastive loss and contrastive module. Could you provide it?
Question
How to visualize the extracted features?
Benchmarks like Set5, Set14, and so on. According to the paper, the initial learning rate is 5e-4 and is halved every 2e5 iterations, but the total number of iterations is...
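The schedule the paper states (initial LR 5e-4, halved every 2e5 iterations) can be written as a simple step-decay function; the function name is illustrative, and the total number of iterations is left open since the question notes it is unspecified.

```python
def learning_rate(iteration, base_lr=5e-4, halve_every=200_000):
    """Step-decay schedule from the paper: halve the learning rate
    every `halve_every` iterations, starting from `base_lr`."""
    return base_lr * 0.5 ** (iteration // halve_every)

print(learning_rate(0))        # 0.0005
print(learning_rate(200_000))  # 0.00025 (first halving)
print(learning_rate(400_000))  # 0.000125 (second halving)
```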
rlfn_ntrie_x2.pth needed, please. Thank you.