ECCV2018_CrossNet_RefSR

What's the difference between the three modes of MultiscaleWarpingNet?

Open BKZero opened this issue 5 years ago • 0 comments

I see that the training code calls:

```python
net_pred = net(buff, mode='input_img1_HR')
```

Looking up the model definition, I found:

```python
if mode == 'input_img2_LR':
    input_img2_LR = torch.from_numpy(buff['input_img2_LR']).cuda()
    flow = self.FlowNet(input_img1_LR, input_img2_LR)
elif mode == 'input_img2_HR':
    flow = self.FlowNet(input_img1_LR, input_img2_HR)
elif mode == 'input_img1_HR':
    flow = self.FlowNet(input_img1_HR, input_img2_HR)
```

As far as I understand, `input_img1_LR` is the low-resolution input image, `input_img1_HR` is its ground-truth high-resolution version, and `input_img2_HR` is the high-resolution reference image. Am I right? When I use the model in a real application, I can only obtain `input_img1_LR` and `input_img2_HR`, yet with the mode set to `'input_img1_HR'` the flow is estimated from `input_img1_HR` and `input_img2_HR`, i.e. from the ground truth that is not available at test time. Have I misunderstood the code, or is there a bug?
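For concreteness, this is a minimal sketch of what I would expect an inference-time call to look like, given that only the LR input and the HR reference exist at test time. The dictionary keys, the `mode` string, and the `net(buff, mode=...)` interface are taken from the snippet above; the array shapes, the preprocessing, and the assumption that `net` is an already-constructed and loaded MultiscaleWarpingNet instance (as in the training script) are mine, not code from the repository:

```python
import numpy as np
import torch

# Hypothetical test-time inputs: only the LR frame and the HR reference exist.
# Shapes below are placeholders, not the repo's actual patch sizes.
lr_input = np.random.rand(1, 3, 80, 80).astype(np.float32)        # plays the role of input_img1_LR
hr_reference = np.random.rand(1, 3, 320, 320).astype(np.float32)  # plays the role of input_img2_HR

buff = {
    'input_img1_LR': lr_input,
    'input_img2_HR': hr_reference,
}

# Under my reading of the mode switch quoted above, 'input_img2_HR' is the only
# branch whose flow is computed purely from data available at inference time,
# i.e. FlowNet(input_img1_LR, input_img2_HR), so this is the call I would expect.
# `net` is assumed to be a MultiscaleWarpingNet built and loaded as in train.py.
with torch.no_grad():
    net_pred = net(buff, mode='input_img2_HR')
```

Is that the intended way to run the model on real data, or is `'input_img1_HR'` really meant to be used outside of training?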

BKZero, Oct 08 '19 07:10