Hi @nicetomeetu21, thanks for reporting this issue; we have fixed it.
Hi @btwbtm, thanks for your interest in our work. Softmax is also used in several network quantization or pruning methods to soften one-hot distributions. In my opinion, softmax may also...
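For intuition, here is a minimal sketch of what "softening" means: dividing the logits by a temperature before softmax turns a hard one-hot/argmax choice into a smooth distribution. The logits and temperature below are made up for illustration.

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([3.0, 1.0, 0.5, 0.2])  # hypothetical scores for a 4-way choice

hard = F.one_hot(logits.argmax(), num_classes=4).float()  # hard one-hot: [1, 0, 0, 0]
soft = F.softmax(logits / 2.0, dim=0)                     # temperature 2.0 softens the peak

print(hard)  # tensor([1., 0., 0., 0.])
print(soft)  # a smooth distribution that still favors the first entry
```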
Hi @afefbnsaid, thanks for your interest in our work. Please provide more details about your training data and the error information (e.g., which line in which file) such that we...
Hi @greatlog, please refer to Tab. II in the supplemental material.
Hi @Salmashi, thanks for your interest in our work. We do not need to crop the HR images before training. Patches are cropped online in the dataloader (https://github.com/LongguangWang/DASR/blob/main/data/multiscalesrdata.py#L157).
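For reference, a minimal sketch of such online cropping, assuming the HR image is an HxWxC numpy array; the function name and defaults are illustrative, not the repo's exact API:

```python
import random
import numpy as np

def get_patch(hr, patch_size=48, scale=4):
    """Crop one HR patch of size (patch_size * scale) at a random,
    scale-aligned position, as a dataloader would do per sample."""
    ih, iw = hr.shape[:2]
    tp = patch_size * scale
    ix = random.randrange(0, (iw - tp) // scale + 1) * scale
    iy = random.randrange(0, (ih - tp) // scale + 1) * scale
    return hr[iy:iy + tp, ix:ix + tp]

hr_image = np.zeros((480, 480, 3), dtype=np.uint8)  # stand-in for a loaded HR image
patch = get_patch(hr_image)                          # shape (192, 192, 3) for scale 4
```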
Hi @jiahong-fu, we extract two patches from one image to construct positive pairs (with the same degradation) for contrastive learning, as illustrated in Fig. 1.
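Continuing the `get_patch` sketch above, the pair construction amounts to cropping the same image twice; whether this matches the repo's exact pipeline is an assumption, the pairing logic is the point:

```python
# Two patches cropped from the SAME image share the same (unknown) degradation,
# so their embeddings form a positive pair for contrastive learning; patches
# from other images in the batch act as negatives.
query_patch = get_patch(hr_image)  # anchor
key_patch   = get_patch(hr_image)  # positive: same image => same degradation
```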
Hi @sujyQ, we will fix this bug in an upcoming update.
Hi @CHUANGQIJI, only the encoder is trained during the first 100 epochs, so GPU utilization is relatively low at that stage; it should become higher afterwards.
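A sketch of that two-stage schedule, with tiny stand-in modules and placeholder losses so it runs; the module names, losses, and optimizer setup are illustrative, not the repo's exact code:

```python
import torch
import torch.nn as nn

# Stand-ins for the degradation encoder and the SR network (illustrative only).
encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.AdaptiveAvgPool2d(1), nn.Flatten())
sr_net = nn.Conv2d(3, 3, 3, padding=1)

opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-4)
opt_all = torch.optim.Adam(list(encoder.parameters()) + list(sr_net.parameters()), lr=1e-4)

ENCODER_ONLY_EPOCHS = 100  # per the reply above

for epoch in range(ENCODER_ONLY_EPOCHS + 1):  # shortened loop for the sketch
    lr_img = torch.rand(4, 3, 48, 48)
    if epoch < ENCODER_ONLY_EPOCHS:
        # Stage 1: only the encoder runs (contrastive pre-training),
        # hence the lower GPU utilization.
        loss = encoder(lr_img).pow(2).mean()  # placeholder for the contrastive loss
        opt = opt_enc
    else:
        # Stage 2: encoder and SR network are trained jointly.
        loss = sr_net(lr_img).abs().mean() + encoder(lr_img).pow(2).mean()  # placeholder losses
        opt = opt_all
    opt.zero_grad()
    loss.backward()
    opt.step()
```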
Hi @mumu-chen, thanks for your interest in our work. Our network was trained on a PC with two RTX 2080 Ti GPUs (22GB memory in total).
@ymtupup Hi, with a noise level of 10 and lambda1/lambda2 set to 0.2/4.0, this result is normal. This degradation setting is close to the one in the last row of the fourth-from-last column of Table 3 in the paper, and the performance is also similar. In addition, we found a bug in `trainer.py`: the anisotropic degradation parameters were not passed to `util.SRMDPreprocessing` at test time, so testing with different parameter settings produced identical results. We have fixed this bug as well.
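A self-contained illustration of the bug pattern (the class and argument names below are made up, not the exact `util.SRMDPreprocessing` signature; check the repo for the real one): if the test-time call omits the degradation parameters, the preprocessing falls back to its defaults, so every tested setting degrades the input identically.

```python
class Preprocessing:
    """Stand-in for a degradation pipeline with anisotropic-kernel parameters."""
    def __init__(self, scale, lambda_1=0.2, lambda_2=4.0, noise=0.0):
        self.scale = scale
        self.lambda_1, self.lambda_2, self.noise = lambda_1, lambda_2, noise

    def __call__(self, hr):
        # Blur with an anisotropic Gaussian (lambda_1, lambda_2),
        # downsample by `scale`, then add noise of level `noise`.
        ...

# Buggy: the requested test settings never reach the preprocessing,
# so the defaults above are used for every test run.
degrade = Preprocessing(scale=4)

# Fixed: forward the requested parameters explicitly at test time.
degrade = Preprocessing(scale=4, lambda_1=0.2, lambda_2=4.0, noise=10.0)
```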