raindrop313
Hi, thank you for sharing this code with us. However, I am confused by the axial rotary embeddings in the rotary_embedding_torch.py file: " elif freqs_for == 'pixel': freqs = torch.linspace(1., max_freq...
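The quoted snippet is truncated, so the exact expression in rotary_embedding_torch may differ. As a hedged illustration only, a 'pixel'-style frequency schedule built with torch.linspace (with example values for dim and max_freq, which are assumptions, not values from the issue) could look like:

```python
import math
import torch

# Hedged sketch of a linspace-based 'pixel' rotary frequency schedule.
# dim and max_freq are hypothetical example values; the real library
# code may use a different upper bound or scaling.
dim = 32        # rotary embedding dimension (assumed)
max_freq = 10   # hypothetical maximum frequency

# Evenly spaced frequencies, one per pair of embedding dimensions.
freqs = torch.linspace(1., max_freq / 2, dim // 2) * math.pi

# Pixel positions normalised to [-1, 1], a common convention for
# axial/pixel rotary embeddings.
pos = torch.linspace(-1., 1., steps=8)

# Outer product gives an (8, dim // 2) table of rotation angles.
angles = pos[:, None] * freqs[None, :]
print(angles.shape)
```

This is only meant to make the shape of the computation concrete, not to reproduce the library's exact formula.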
I have modified the GLEAN model; it can be trained normally and periodically runs inference on the validation set, but when inference completes, an error is...
Thank you for sharing your work. I have a question about the locality of the average pooling. As I understand it, your work aims to reduce the inconsistency between training and inference: when the inference image is larger than the training image, the average pooling should, per my reading of the paper, be performed locally, in a convolution-like sliding-window fashion. For example, if training uses 64*64 patches and inference uses 256*256 images, the pooling kernel should be 96 (64*1.5). However, the released code computes it as follows:
    self.kernel_size[0] = x.shape[2]*self.base_size[0]//self.train_size[-2]
    self.kernel_size[1] = x.shape[3]*self.base_size[1]//self.train_size[-1]
Since base_size is 1.5x train_size, doesn't the pooling kernel always end up being 1.5x the input feature map x? That seems inconsistent with the "local" idea. Perhaps I have misunderstood; I would be very grateful for any clarification.
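The arithmetic behind this question can be checked directly. The sketch below takes only the quoted formula from the issue; the concrete train_size, base_size, and feature-map sizes are hypothetical example values chosen to match the 64*64 vs. 256*256 scenario described above.

```python
# Sketch of the kernel-size arithmetic quoted in the issue.
# base_size and train_size values are example assumptions
# (base_size = 1.5 * train_size, as the issue states).

def kernel_size(h, w, base_size=(96, 96), train_size=(64, 64)):
    # Quoted computation: the kernel scales with the input
    # feature-map size, not with a fixed local window.
    kh = h * base_size[0] // train_size[0]
    kw = w * base_size[1] // train_size[1]
    return kh, kw

# Training-sized input: 64x64 feature map -> 96x96 kernel (1.5x).
print(kernel_size(64, 64))

# Larger inference input: 256x256 -> 384x384 kernel, still 1.5x
# the input rather than a fixed 96x96 local window, which is
# exactly the behaviour the questioner is asking about.
print(kernel_size(256, 256))
```

This reproduces the questioner's observation: with this formula the kernel is always 1.5x the feature map, whatever the inference resolution.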
At 24x super-resolution, using MATLAB's downsampling and upsampling does not produce 1024-sized inputs, but I see that the paper reports results for X24. How should I run the model...
Hello, I see that only the models trained on SOTS are in your Google Drive. Could you post the model trained on O-Haze? Thanks.