
Runtime error during 'python train.py'

Open ramyaragh opened this issue 7 years ago • 4 comments

I successfully ran the word-embedding step, but training fails with the runtime error below. Any suggestions?

```
Traceback (most recent call last):
  File "train.py", line 59, in <module>
    cross_entropy, kld, coef = train_step(iteration, args.batch_size, args.use_cuda, args.dropout)
  File "~/pytorch_RVAE/model/rvae.py", line 104, in train
    z=None)
  File "~/pytorch/torch/lib/python2.7/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "~/pytorch_RVAE/model/rvae.py", line 64, in forward
    encoder_input = self.embedding(encoder_word_input, encoder_character_input)
  File "~/pytorch/torch/lib/python2.7/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "~/pytorch_RVAE/selfModules/embedding.py", line 47, in forward
    character_input = self.TDNN(character_input)
  File "~/pytorch/torch/lib/python2.7/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "~/pytorch_RVAE/selfModules/tdnn.py", line 42, in forward
    xs = [x.max(2)[0].squeeze(2) for x in xs]
RuntimeError: dimension out of range (expected to be in range of [-2, 1], but got 2)
```

ramyaragh avatar Feb 05 '18 19:02 ramyaragh

Delete the `.squeeze(2)` on that line in `tdnn.py`.
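For context, a quick sketch of why that call fails on recent PyTorch versions (0.4 and later): `Tensor.max(dim)` drops the reduced dimension by default, so the values tensor is already 2-D and a further `.squeeze(2)` is out of range. The shapes below are made up for illustration:

```python
import torch

# Simulate one element of `xs` in tdnn.py: (batch, channels, width).
x = torch.randn(5, 7, 9)

# In PyTorch >= 0.4, max over dim 2 already removes that dimension,
# so the values tensor has shape (5, 7); squeezing dim 2 would raise
# the "dimension out of range" error from the traceback above.
values = x.max(2)[0]
print(values.shape)  # torch.Size([5, 7])
```

On the old PyTorch this code was written for, the reduced dimension was kept as size 1, which is why the original line squeezed it away.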

ruotianluo avatar Feb 10 '18 07:02 ruotianluo

That does not seem to help; now I get a different error:

```
preprocessed data was found and loaded
Traceback (most recent call last):
  File "train.py", line 59, in <module>
    cross_entropy, kld, coef = train_step(iteration, args.batch_size, args.use_cuda, args.dropout)
  File "~/pytorch_RVAE/model/rvae.py", line 104, in train
    z=None)
  File "/anaconda/lib/python3.5/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "~/pytorch_RVAE/model/rvae.py", line 66, in forward
    context = self.encoder(encoder_input)
  File "/anaconda/lib/python3.5/site-packages/torch/nn/modules/module.py", line 325, in __call__
    result = self.forward(*input, **kwargs)
  File "~/pytorch_RVAE/model/encoder.py", line 35, in forward
    assert parameters_allocation_check(self),
  File "~/pytorch_RVAE/utils/functional.py", line 15, in parameters_allocation_check
    return fold(f_and, parameters, True) or not fold(f_or, parameters, False)
  File "~/pytorch_RVAE/utils/functional.py", line 2, in fold
    return a if (len(l) == 0) else fold(f, l[1:], f(a, l[0]))
  File "~/pytorch_RVAE/utils/functional.py", line 2, in fold
    return a if (len(l) == 0) else fold(f, l[1:], f(a, l[0]))
  File "~/pytorch_RVAE/utils/functional.py", line 6, in f_and
    return x and y
  File "/anaconda/lib/python3.5/site-packages/torch/autograd/variable.py", line 125, in __bool__
    torch.typename(self.data) + " is ambiguous")
RuntimeError: bool value of Variable objects containing non-empty torch.FloatTensor is ambiguous
```

ramyaragh avatar Feb 13 '18 20:02 ramyaragh

That's a different error. My workaround was to make the `parameters_allocation_check` function always return `True`.
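A minimal sketch of that workaround in `utils/functional.py`, replacing the fold-based body (which truth-tests multi-element Variables and so raises the "bool value ... is ambiguous" error on newer PyTorch):

```python
# utils/functional.py (workaround sketch): stub out the allocation
# check so the assert in encoder.py always passes. The check's
# original intent (verifying all parameters live on the same device)
# is simply skipped.
def parameters_allocation_check(module):
    return True
```

This silences the assertion rather than fixing the check itself; if you mix CPU and CUDA parameters, the failure will surface later instead.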

ruotianluo avatar Feb 13 '18 21:02 ruotianluo

@ruotianluo which version of PyTorch are you working with?

SeekPoint avatar Jun 13 '19 14:06 SeekPoint