pytorch-semseg

RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 46 and 47 in dimension 3 at /opt/conda/conda-bld/pytorch_1550780889552/work/aten/src/THC/generic/THCTensorMath.cu:83

Open li0128 opened this issue 6 years ago • 5 comments

```
$ python3 train.py
RUNDIR: runs/unet_pascal/82417
Traceback (most recent call last):
  File "train.py", line 231, in <module>
    train(cfg, writer, logger)
  File "train.py", line 129, in train
    outputs = model(images)
  File "/home/lxc/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/lxc/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 143, in forward
    outputs = self.parallel_apply(replicas, inputs, kwargs)
  File "/home/lxc/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 153, in parallel_apply
    return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
  File "/home/lxc/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 83, in parallel_apply
    raise output
  File "/home/lxc/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py", line 59, in _worker
    output = module(*input, **kwargs)
  File "/home/lxc/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/lxc/PythonProjects/pytorch-semseg-master/ptsemseg/models/unet.py", line 57, in forward
    up4 = self.up_concat4(conv4, center)
  File "/home/lxc/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/lxc/PythonProjects/pytorch-semseg-master/ptsemseg/models/utils.py", line 205, in forward
    return self.conv(torch.cat([outputs1, outputs2], 1))
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 46 and 47 in dimension 3 at /opt/conda/conda-bld/pytorch_1550780889552/work/aten/src/THC/generic/THCTensorMath.cu:83
```

li0128 avatar Feb 26 '19 14:02 li0128

Did you solve it?

wackxu avatar Mar 06 '19 13:03 wackxu

In `torch.cat([outputs1, outputs2], 1)` the two tensors do not have the same spatial size: when the size difference is odd, `offset // 2` rounds it down, so the padding never makes the tensors match.
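Here is a minimal standalone sketch (not code from this repo) of where the one-pixel mismatch comes from: an odd spatial size loses a pixel through a stride-2 pooling layer and does not get it back from a stride-2 transposed convolution.

```python
import torch
import torch.nn as nn

skip = torch.randn(1, 1, 47, 47)                              # encoder feature map with an odd size
down = nn.MaxPool2d(2)(skip)                                  # 47 -> 23 (floor division drops a pixel)
up = nn.ConvTranspose2d(1, 1, kernel_size=2, stride=2)(down)  # 23 -> 46

print(skip.shape, up.shape)  # torch.Size([1, 1, 47, 47]) vs torch.Size([1, 1, 46, 46])
# torch.cat([skip, up], dim=1) then fails exactly like the error above: 47 != 46.
```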

Cverlpeng avatar Apr 01 '19 14:04 Cverlpeng

Hello, have you solved this problem?

Qirui-Y avatar May 05 '19 06:05 Qirui-Y

I see this error when I train U-Net on ADE20K:

```
RUNDIR: runs/ade20k/27232
Traceback (most recent call last):
  File "/home/yubao/data/share/SpacialAI/SemanticLabeling/train.py", line 230, in <module>
    train(cfg, writer, logger)
  File "/home/yubao/data/share/SpacialAI/SemanticLabeling/train.py", line 127, in train
    outputs = model(images)
  File "/home/yubao/data/software/anaconda2/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/yubao/data/software/anaconda2/envs/pytorch/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 121, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/yubao/data/software/anaconda2/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/yubao/data/share/SpacialAI/SemanticLabeling/ptsemseg/models/unet.py", line 58, in forward
    up3 = self.up_concat3(conv3, up4)
  File "/home/yubao/data/software/anaconda2/envs/pytorch/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/yubao/data/share/SpacialAI/SemanticLabeling/ptsemseg/models/utils.py", line 205, in forward
    return self.conv(torch.cat([outputs1, outputs2], 1))
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 152 and 151 in dimension 2 at /pytorch/aten/src/THC/generic/THCTensorMath.cu:87

Process finished with exit code 1
```

yubaoliu avatar May 27 '19 06:05 yubaoliu

@li0128 @yubaoliu Hey, I came across the same problem and solved it by changing `unetUp.forward` in `ptsemseg/models/utils.py`, around line 205:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# unetConv2 is the double-conv block defined earlier in ptsemseg/models/utils.py


class unetUp(nn.Module):
    def __init__(self, in_size, out_size, is_deconv):
        super(unetUp, self).__init__()
        self.conv = unetConv2(in_size, out_size, False)
        if is_deconv:
            self.up = nn.ConvTranspose2d(in_size, out_size, kernel_size=2, stride=2)
        else:
            self.up = nn.UpsamplingBilinear2d(scale_factor=2)

    def forward(self, inputs1, inputs2):
        outputs2 = self.up(inputs2)
        # Pad (or crop, when the offset is negative) the skip connection so its
        # spatial size matches the upsampled tensor, splitting an odd offset
        # between the two sides instead of rounding it away.
        offsetY = outputs2.size()[2] - inputs1.size()[2]
        offsetX = outputs2.size()[3] - inputs1.size()[3]
        # original code:
        # padding = 2 * [offset // 2, offset // 2]
        # outputs1 = F.pad(inputs1, padding)
        outputs1 = F.pad(inputs1, (offsetX // 2, offsetX - offsetX // 2,
                                   offsetY // 2, offsetY - offsetY // 2))
        return self.conv(torch.cat([outputs1, outputs2], dim=1))
```
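A quick standalone check of just the padding arithmetic, using hypothetical shapes that mirror the 46 vs. 47 mismatch from the traceback (the channel count is made up), shows the skip connection being brought to the same size before the concat:

```python
import torch
import torch.nn.functional as F

outputs2 = torch.randn(1, 256, 46, 46)  # upsampled decoder tensor
inputs1 = torch.randn(1, 256, 47, 47)   # encoder skip connection

offsetY = outputs2.size()[2] - inputs1.size()[2]  # -1
offsetX = outputs2.size()[3] - inputs1.size()[3]  # -1
# Negative padding in F.pad crops, so the 47-pixel sides lose one pixel each.
outputs1 = F.pad(inputs1, (offsetX // 2, offsetX - offsetX // 2,
                           offsetY // 2, offsetY - offsetY // 2))

print(outputs1.shape)                                # torch.Size([1, 256, 46, 46])
print(torch.cat([outputs1, outputs2], dim=1).shape)  # torch.Size([1, 512, 46, 46])
```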

pprp avatar Nov 25 '19 01:11 pprp