RAUNet
Issues with shape in AAM Module.
Hi, I have this issue. I am trying to train a pupil segmentation model with your RAUNet network on images of 320x640 pixels. I added two print lines to the forward method of the AAM class, as follows:
def forward(self, input_high, input_low):
    mid_high = self.global_pooling(input_high)
    weight_high = self.conv1(mid_high)
    mid_low = self.global_pooling(input_low)
    weight_low = self.conv2(mid_low)
    weight = self.conv3(weight_low + weight_high)
    low = self.conv4(input_low)
    print(input_high.shape)
    print(low.mul(weight).shape)
    return input_high + low.mul(weight)
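As a side note, one defensive workaround (not part of the original RAUNet code, just a sketch) is to resize the low branch onto input_high's spatial grid with F.interpolate before the elementwise operations, so an off-by-one mismatch cannot crash the forward pass. The cleaner fix is still to feed inputs whose sides are divisible by 32, as discussed below.

    import torch.nn.functional as F

    def forward(self, input_high, input_low):
        mid_high = self.global_pooling(input_high)
        weight_high = self.conv1(mid_high)
        mid_low = self.global_pooling(input_low)
        weight_low = self.conv2(mid_low)
        weight = self.conv3(weight_low + weight_high)
        low = self.conv4(input_low)
        # Workaround: force `low` onto input_high's spatial size so that
        # sizes like 31 vs 32 cannot collide in the add below. `weight` is
        # [B, C, 1, 1] from the global pooling, so it broadcasts either way.
        low = F.interpolate(low, size=input_high.shape[2:],
                            mode='bilinear', align_corners=False)
        return input_high + low.mul(weight)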
The print results give me:
torch.Size([8, 256, 8, 10])   torch.Size([8, 256, 8, 10])
torch.Size([8, 128, 16, 20])  torch.Size([8, 128, 16, 20])
torch.Size([8, 64, 32, 40])   torch.Size([8, 64, 31, 40])
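The last pair is where it breaks: input_high has height 32 while low.mul(weight) has height 31, so the elementwise add in the AAM cannot broadcast. Mismatches like this typically appear when the input height or width is not a multiple of 32: stride-2 downsampling floors odd sizes on the way down, while the decoder doubles exactly on the way back up, so the skip connections drift by one. A minimal standalone sketch of the effect (generic pooling/upsampling, not the RAUNet layers):

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 1, 250, 320)           # height 250 is not divisible by 32
    h = x
    for _ in range(5):                        # five stride-2 stages, as in a /32 encoder
        h = F.max_pool2d(h, kernel_size=2)    # 250 -> 125 -> 62 -> 31 -> 15 -> 7
    for _ in range(5):                        # the decoder doubles exactly on the way up
        h = F.interpolate(h, scale_factor=2)  # 7 -> 14 -> 28 -> 56 -> 112 -> 224
    print(x.shape[2], h.shape[2])             # 250 vs 224: skips no longer line up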
And the output error is:
RuntimeError                              Traceback (most recent call last)

d:\vhcg77\Anaconda3\envs\raunet\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

d:\vhcg77\Anaconda3\envs\raunet\lib\site-packages\torch\nn\parallel\data_parallel.py in forward(self, *inputs, **kwargs)
    157         inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
    158         if len(self.device_ids) == 1:
--> 159             return self.module(*inputs[0], **kwargs[0])
    160         replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
    161         outputs = self.parallel_apply(replicas, inputs, kwargs)

d:\vhcg77\Anaconda3\envs\raunet\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

d:\vhcg77\OneDrive\iris_project\RAUNet\RAUNet.py in forward(self, x)
     96         b3 = self.gau2(d3, e2)
     97         d2 = self.decoder2(b3)
---> 98         b2 = self.gau1(d2, e1)
     99         d1 = self.decoder1(b2)
    100

d:\vhcg77\Anaconda3\envs\raunet\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

d:\vhcg77\OneDrive\iris_project\RAUNet\RAUNet.py in forward(self, input_high, input_low)
     39         print((input_high).shape)
     40         print((low.mul(weight)).shape)
---> 41         return input_high+low.mul(weight)
     42
     43 class RAUNet(nn.Module):

RuntimeError: The size of tensor a (32) must match the size of tensor b (31) at non-singleton dimension 2
What can I do?
You can remove the padding in load_dataset.
RuntimeError: The size of tensor a (136) must match the size of tensor b (135) at non-singleton dimension 2
I used my own data and other code; the model is your code. How can I solve this?
The size of the input image should be divisible by 32.
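If resizing the data itself is not an option, one common alternative (a sketch, not part of this repository) is to pad each batch up to the next multiple of 32 before the forward pass and crop the prediction back afterwards:

    import torch
    import torch.nn.functional as F

    def pad_to_multiple(x, multiple=32):
        """Zero-pad an NCHW tensor on the right/bottom to a multiple of `multiple`."""
        h, w = x.shape[2:]
        pad_h = (multiple - h % multiple) % multiple
        pad_w = (multiple - w % multiple) % multiple
        # F.pad takes (left, right, top, bottom) for the last two dims
        return F.pad(x, (0, pad_w, 0, pad_h)), (h, w)

    # Hypothetical usage with a RAUNet instance `model`:
    # padded, (h, w) = pad_to_multiple(images)   # e.g. 250x320 -> 256x320
    # logits = model(padded)
    # logits = logits[..., :h, :w]               # crop back to the original size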