HPSG-Neural-Parser

Runtime Error: The size of tensor a (1100) must match the size of tensor b (695) at non-singleton dimension 0

Open tanvidadu opened this issue 5 years ago • 7 comments

Hey @DoodleJZ, I came across this error while running your parser. Could you please look into it and fix it?

```
Traceback (most recent call last):
  File "/content/drive/My Drive/Hd/DependencyParser/src_joint/main.py", line 746, in <module>
    main()
  File "/content/drive/My Drive/Hd/DependencyParser/src_joint/main.py", line 742, in main
    args.callback(args)
  File "/content/drive/My Drive/Hd/DependencyParser/src_joint/main.py", line 672, in run_parse
    syntree, _ = parser.parse_batch(subbatch_sentences)
  File "/content/drive/My Drive/Hd/DependencyParser/src_joint/Zparser.py", line 1364, in parse_batch
    extra_content_annotations=extra_content_annotations)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/drive/My Drive/Hd/DependencyParser/src_joint/Zparser.py", line 822, in forward
    res, current_attns = attn(res, batch_idxs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 541, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/drive/My Drive/Hd/DependencyParser/src_joint/Zparser.py", line 344, in forward
    return self.layer_norm(outputs + residual), attns_padded
RuntimeError: The size of tensor a (1100) must match the size of tensor b (695) at non-singleton dimension 0
```

tanvidadu avatar Nov 05 '19 18:11 tanvidadu

I've encountered the same issue. @DoodleJZ @tanvidadu, has anyone fixed this problem?

wujsAct avatar Feb 17 '20 01:02 wujsAct

The dimensions may not match between `d_model` and the sum of `d_tag`, `d_word`, and `d_char` when you concatenate all the embeddings. You can check the dimension of each embedding part to find the problem easily.
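A minimal sanity check for the constraint DoodleJZ describes: the concatenated embedding width must equal `d_model`. The names and sizes below are illustrative, not the repo's actual config values.

```python
import torch

# Hypothetical dimensions illustrating the constraint: the model
# concatenates tag, word, and char embeddings, so their widths must
# sum to d_model, or later residual additions will fail.
d_tag, d_word, d_char = 50, 100, 64
d_model = d_tag + d_word + d_char          # 214

tag_emb  = torch.randn(7, d_tag)           # 7 tokens in a toy sentence
word_emb = torch.randn(7, d_word)
char_emb = torch.randn(7, d_char)

combined = torch.cat([tag_emb, word_emb, char_emb], dim=-1)
assert combined.size(-1) == d_model, (
    f"embedding sum {combined.size(-1)} != d_model {d_model}")
print(combined.shape)                      # torch.Size([7, 214])
```

If the assertion fails, one of the embedding sizes in the config disagrees with `d_model`.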

DoodleJZ avatar Feb 17 '20 03:02 DoodleJZ

@DoodleJZ, I do not use the d_tag and d_char embeddings. I am running this code with Python 3.6 and PyTorch 0.4.0.

The error is that `residual` has shape `(packed_len, d_model)` while `outputs` has shape `(self.batch_size * self.max_len, d_model)`, so the mismatch comes from the first dimension, not the second. I compressed glove.6B.100d.txt into glove.gz, and I cannot confirm whether `oov: 18820` is normal when running test.sh.

```
Loading model from models/cwt.pt...
loading embedding: glove from data/glove.gz
oov: 18820
Reading dependency parsing data from data/ptb_test_3.3.0.sd
Loading test trees from data/23.auto.clean...
Loaded 2,416 test examples.
Parsing test sentences...
packed_len: 2501
sentences: 100
torch.Size([2501])
self.batch_size 100
self.max_len: 50
residual: torch.Size([2501, 1024])
v_padded: torch.Size([800, 50, 64])
outputs_padded: torch.Size([800, 50, 64])
outputs = outputs_padded[output_mask]: torch.Size([40000, 64])
d_v1: 32
outputs = self.combine_v(outputs): torch.Size([5000, 1024])
outputs = self.residual_dropout(outputs, batch_idxs): torch.Size([5000, 1024])
Traceback (most recent call last):
  File "src_joint/main.py", line 746, in <module>
    main()
  File "src_joint/main.py", line 742, in main
    args.callback(args)
  File "src_joint/main.py", line 577, in run_test
    predicted, _, = parser.parse_batch(subbatch_sentences)
  File "/home/LAB/wujs/word_pos/HPSG-Neural-Parser/src_joint/Zparser.py", line 1370, in parse_batch
    extra_content_annotations=extra_content_annotations)
  File "/home/LAB/wujs/software/pytorch-0.4/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/LAB/wujs/word_pos/HPSG-Neural-Parser/src_joint/Zparser.py", line 827, in forward
    res, current_attns = attn(res, batch_idxs)
  File "/home/LAB/wujs/software/pytorch-0.4/lib/python3.6/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/LAB/wujs/word_pos/HPSG-Neural-Parser/src_joint/Zparser.py", line 349, in forward
    return self.layer_norm(outputs + residual), attns_padded
RuntimeError: The size of tensor a (5000) must match the size of tensor b (2501) at non-singleton dimension 0
srun: error: dell-gpu-32: task 0: Exited with exit code 1
```

wujsAct avatar Feb 17 '20 04:02 wujsAct

Maybe you need to try PyTorch >= 1.0.0. The error comes from `output_mask`, which occurs when the PyTorch version is not matched.
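The log above shows why: with a properly typed boolean mask, `outputs_padded[output_mask]` should select only the real (unpadded) positions and yield `packed_len` rows, lining up with `residual`; the `40000`/`5000`-row shapes indicate the mask was not being treated as a mask. A small sketch of the intended behavior on a recent PyTorch, with illustrative sizes rather than the repo's:

```python
import torch

# Toy version of the masking step: pad to (batch, max_len, d_model),
# then use a *bool* mask to keep only real positions, which gives
# packed_len rows that match the residual's first dimension.
batch, max_len, d_model = 4, 5, 8
lengths = torch.tensor([5, 3, 4, 2])     # real sentence lengths
packed_len = int(lengths.sum())          # 14

outputs_padded = torch.randn(batch, max_len, d_model)
# Bool mask (dtype torch.bool), the type modern PyTorch expects for
# masked indexing, as opposed to an uninitialized ByteTensor:
mask = torch.arange(max_len).unsqueeze(0) < lengths.unsqueeze(1)

outputs = outputs_padded[mask]           # (packed_len, d_model)
residual = torch.randn(packed_len, d_model)
print((outputs + residual).shape)        # torch.Size([14, 8])
```

If the mask has the wrong dtype or garbage contents, the selected row count no longer equals `packed_len`, and the residual addition raises exactly this size-mismatch error.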

DoodleJZ avatar Feb 17 '20 05:02 DoodleJZ

@DoodleJZ Sorry to bother you; I am not very familiar with PyTorch. I just changed `torch_t.ByteTensor` to `torch_t.BoolTensor` as follows, and now everything works:

```python
def pad_and_rearrange(...):
    invalid_mask = torch_t.BoolTensor(mb_size, len_padded).fill_(True)
```

wujsAct avatar Feb 17 '20 16:02 wujsAct

> @DoodleJZ Sorry to bother you; I am not very familiar with PyTorch. I just changed `torch_t.ByteTensor` to `torch_t.BoolTensor` as follows, and now everything works: `def pad_and_rearrange(...): invalid_mask = torch_t.BoolTensor(mb_size, len_padded).fill_(True)`

it works perfectly!

CoyoteLeo avatar May 28 '20 17:05 CoyoteLeo

Hi @wujsAct, @CoyoteLeo, @tanvidadu. I am afraid that I have a similar problem:

I have two tensors that I want to add: a noise tensor of shape N x 1 x 64 x 64, and another tensor of the same shape. Things work fine until the very last batch index, it seems, when the program stops and complains with the following error message: `RuntimeError: The size of tensor a (96) must match the size of tensor b (128) at non-singleton dimension 0.`

Now, here is a bit of code:

```python
for batch_idx, (real_images, targets) in enumerate(train_loader):

    noise_disc = -torch.rand(size=(batch_size, 1, 64, 64)) / 5
    noise_disc = noise_disc.to(device)

    real_images = real_images.to(device)  # shape: (batch_size, 1, 64, 64)
    images_disc = real_images + noise_disc

    # move to device:
    images_disc = images_disc.to(device)
```

Unfortunately, I don't really understand why this error occurs, and I would appreciate help a lot!

Merry Christmas. :-)

ghost avatar Dec 24 '20 13:12 ghost