
Error about the size of tensor

Open ywyue opened this issue 4 years ago • 7 comments

Hi, thanks for your great work. When I ran `train.py`, it gave me this error:

```
{'Airplane': 0, 'Bag': 1, 'Cap': 2, 'Car': 3, 'Chair': 4, 'Earphone': 5, 'Guitar': 6, 'Knife': 7, 'Lamp': 8, 'Laptop': 9, 'Motorbike': 10, 'Mug': 11, 'Pistol': 12, 'Rocket': 13, 'Skateboard': 14, 'Table': 15}
16 15990
Traceback (most recent call last):
  File "train.py", line 89, in <module>
    pred = net(points_v, cutdim_v)
  File "/home/ywyue/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "train.py", line 45, in forward
    x1 = kdconv(x, 2048, 8, c[-1], self.conv1)
  File "train.py", line 37, in kdconv
    sel = Variable(sel + (torch.arange(0, dim) * 3).long())
RuntimeError: The size of tensor a (2) must match the size of tensor b (2048) at non-singleton dimension 0
```

I am using PyTorch 1.1.0, but I don't think this error was caused by the PyTorch version. Could you help me with that? Thanks.

ywyue avatar Aug 02 '19 22:08 ywyue

I met the same error. What is the meaning of `sel = Variable(sel + (torch.arange(0, dim) * 3).long())`? The sizes of the two tensors don't match.
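For what it's worth, that line builds a flat channel index: for each of the `dim` tree nodes it assumes three candidate channels laid out per node, and `sel` picks one axis (0, 1, or 2) per node. A minimal sketch of what it computes, and of why the mismatch arises when `sel` has the wrong length for the level (variable names here are illustrative):

```python
import torch

# `sel` holds the chosen split axis (0, 1 or 2) for each of `dim` nodes;
# adding arange(dim) * 3 turns it into a flat index, assuming a per-node
# layout [n0_x, n0_y, n0_z, n1_x, n1_y, n1_z, ...].
dim = 4
sel = torch.tensor([0, 2, 1, 0])        # chosen split axis per node
index = sel + torch.arange(0, dim) * 3  # flat index per node
print(index)  # tensor([0, 5, 7, 9])

# The reported RuntimeError appears when `sel` has the wrong length for
# the level, e.g. a 2-element `sel` broadcast against arange(2048) * 3:
bad_sel = torch.tensor([0, 1])
try:
    bad_sel + torch.arange(0, 2048) * 3
    broadcast_failed = False
except RuntimeError as e:
    broadcast_failed = True
    print(e)
```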

sausagecy avatar Sep 23 '19 03:09 sausagecy

I also met the same error. Is there someone who has solved this problem? Thanks! : )

ty625911724 avatar Jun 21 '20 12:06 ty625911724

> I also met the same error. Is there someone who has solved this problem? Thanks! : )

Maybe this problem can be solved by changing the KD-Net code. The order of `c` used in `forward` is `c[-1], c[-2], ..., c[-11]`; you could try reversing it, i.e. `c[-11], c[-10], ..., c[-1]`.

ty625911724 avatar Jun 22 '20 11:06 ty625911724

The code was tested with `pytorch==0.3.1` and needs to be adapted for newer PyTorch versions.
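For anyone adapting it: on torch >= 0.4 the `Variable` wrapper is a deprecated no-op (plain tensors carry autograd), so the index-select step can be written without it. A hedged sketch of what that adaptation might look like (the function name and shapes are illustrative, not from the repo):

```python
import torch

def select_split_channels(x, sel, dim):
    # x: (batch, featdim, 3 * dim); sel: (dim,) with values in {0, 1, 2}.
    # Build the flat index directly as a tensor and move it to x's device
    # instead of wrapping it in the deprecated Variable.
    index = (sel + torch.arange(0, dim) * 3).long().to(x.device)
    return torch.index_select(x, dim=2, index=index)

x = torch.randn(2, 8, 3 * 4)
sel = torch.tensor([0, 2, 1, 0])
out = select_split_channels(x, sel, 4)
print(out.shape)  # torch.Size([2, 8, 4])
```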

fxia22 avatar Jun 22 '20 16:06 fxia22

@fxia22

```
RuntimeError: inconsistent tensor size, expected r_ [2], t [2] and src [2048] to have the same number of elements, but got 2, 2 and 2048 elements respectively at /pytorch/torch/lib/TH/generic/THTensorMath.c:1021
```

I got the very same error message with multiple torch versions (0.3.1.post2, 0.3.1, 0.3.0, 1.0.0).

fpshuang avatar Aug 28 '20 08:08 fpshuang

> @fxia22 RuntimeError: inconsistent tensor size, expected r_ [2], t [2] and src [2048] to have the same number of elements, but got 2, 2 and 2048 elements respectively at /pytorch/torch/lib/TH/generic/THTensorMath.c:1021
>
> I got the very same error message with multiple torch versions. (0.3.1.post2, 0.3.1, 0.3.0, 1.0.0)

I got the same error with torch version 0.3.1.

ch19971118 avatar Jun 23 '21 06:06 ch19971118

The following method solves this problem (https://github.com/fxia22/kdnet.pytorch/issues/11#issuecomment-647470761):

@ty625911724

> Maybe this problem could be solved by changing the KD networks code. The order of c is c[-1],c[-2],..,c[-11], maybe you can reverse the order, like c[-11],c[-10],...,c[-1].

Alternatively, the `KDNet(nn.Module)` class can be modified as follows to solve this problem:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable


class KDNet(nn.Module):
    def __init__(self, k=16):
        super(KDNet, self).__init__()
        # One 1x1 conv per tree level; each produces 3 candidate feature
        # sets per node, one for each possible split axis.
        self.conv1 = nn.Conv1d(3, 8 * 3, 1, 1)
        self.conv2 = nn.Conv1d(8, 32 * 3, 1, 1)
        self.conv3 = nn.Conv1d(32, 64 * 3, 1, 1)
        self.conv4 = nn.Conv1d(64, 64 * 3, 1, 1)
        self.conv5 = nn.Conv1d(64, 64 * 3, 1, 1)
        self.conv6 = nn.Conv1d(64, 128 * 3, 1, 1)
        self.conv7 = nn.Conv1d(128, 256 * 3, 1, 1)
        self.conv8 = nn.Conv1d(256, 512 * 3, 1, 1)
        self.conv9 = nn.Conv1d(512, 512 * 3, 1, 1)
        self.conv10 = nn.Conv1d(512, 512 * 3, 1, 1)
        self.conv11 = nn.Conv1d(512, 1024 * 3, 1, 1)
        self.fc = nn.Linear(1024, k)

    def forward(self, x, c):
        def kdconv(x, dim, featdim, sel, conv):
            x = F.relu(conv(x))
            x = x.view(-1, featdim, 3, dim)
            x = x.view(-1, featdim, 3 * dim)
            # Select one of the 3 candidate channels per node according
            # to its split axis; sel must have length dim here.
            sel = Variable(sel + (torch.arange(0, dim) * 3).long())
            if x.is_cuda:
                sel = sel.cuda()
            x = torch.index_select(x, dim=2, index=sel)
            # Max-pool sibling pairs to halve the number of nodes.
            x = x.view(-1, featdim, int(dim / 2), 2)
            x = torch.squeeze(torch.max(x, dim=-1, keepdim=True)[0], 3)
            return x

        # Index c from the front (c[0] ... c[10]) rather than from the
        # back (c[-1] ... c[-11]), so each level's sel length matches dim.
        x1 = kdconv(x, 2048, 8, c[0], self.conv1)
        x2 = kdconv(x1, 1024, 32, c[1], self.conv2)
        x3 = kdconv(x2, 512, 64, c[2], self.conv3)
        x4 = kdconv(x3, 256, 64, c[3], self.conv4)
        x5 = kdconv(x4, 128, 64, c[4], self.conv5)
        x6 = kdconv(x5, 64, 128, c[5], self.conv6)
        x7 = kdconv(x6, 32, 256, c[6], self.conv7)
        x8 = kdconv(x7, 16, 512, c[7], self.conv8)
        x9 = kdconv(x8, 8, 512, c[8], self.conv9)
        x10 = kdconv(x9, 4, 512, c[9], self.conv10)
        x11 = kdconv(x10, 2, 1024, c[10], self.conv11)
        x11 = x11.view(-1, 1024)
        out = F.log_softmax(self.fc(x11), dim=1)  # explicit dim for newer PyTorch
        return out
```
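As a sanity check on the fix, the root level of `kdconv` can be exercised on its own: with `c` indexed from the front, the level-0 split list has one entry per point, so the index line broadcasts cleanly. A minimal sketch with random weights and random split axes (`points` and `c0` are illustrative stand-ins for the real data loader outputs):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def kdconv(x, dim, featdim, sel, conv):
    # One KD-Net level: pick one of 3 candidate channels per node
    # by split axis, then max-pool sibling pairs to halve the nodes.
    x = F.relu(conv(x))
    x = x.view(-1, featdim, 3, dim)
    x = x.view(-1, featdim, 3 * dim)
    index = (sel + torch.arange(0, dim) * 3).long()
    x = torch.index_select(x, dim=2, index=index)
    x = x.view(-1, featdim, dim // 2, 2)
    return torch.max(x, dim=-1)[0]

conv1 = nn.Conv1d(3, 8 * 3, 1, 1)
points = torch.randn(4, 3, 2048)      # (batch, xyz, num_points)
c0 = torch.randint(0, 3, (2048,))     # split axis per node at the root level
x1 = kdconv(points, 2048, 8, c0, conv1)
print(x1.shape)  # torch.Size([4, 8, 1024])
```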

For more details about KD-Net, you can also refer to this link: https://aistudio.baidu.com/aistudio/projectdetail/1494544

dhliubj avatar Jul 05 '22 03:07 dhliubj