
What are the exact PyTorch and torchtext versions for your code? I am trying to downgrade to a previous version to avoid the Multi30k.splits() problem, but failed.

Open yaoyiran opened this issue 6 years ago • 5 comments

What are the exact PyTorch and torchtext versions for your code? I am trying to downgrade to a previous version to avoid the Multi30k.splits() problem, but failed.

yaoyiran avatar Oct 16 '18 03:10 yaoyiran

I can't remember, sorry. I need to re-write this in PyTorch v1.0.

keon avatar Oct 16 '18 07:10 keon

Here is another bug that I ran into with the code. Do you have any idea how to fix it?

```
/workspace/seq2seq/model.py:47: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
  energy = F.softmax(self.attn(torch.cat([hidden, encoder_outputs], 2)))
Traceback (most recent call last):
  File "train.py", line 112, in <module>
    main()
  File "train.py", line 94, in main
    en_size, args.grad_clip, DE, EN)
  File "train.py", line 52, in train
    output = model(src, trg)
  File "/home/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/workspace/seq2seq/model.py", line 105, in forward
    output, hidden, encoder_output)
  File "/home/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/workspace/seq2seq/model.py", line 75, in forward
    attn_weights = self.attention(last_hidden[-1], encoder_outputs)
  File "/home/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/workspace/seq2seq/model.py", line 43, in forward
    return F.relu(attn_energies, dim=1).unsqueeze(1)
TypeError: relu() got an unexpected keyword argument 'dim'
```

yaoyiran avatar Oct 16 '18 07:10 yaoyiran

@yaoyiran you might want to remove the dim keyword arg from relu and add dim=2 to the softmax call, and see if that resolves the issue. What version of PyTorch are you using?
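
An untested sketch of the two changes I mean, with dummy shapes just to show the calls (not the repo's actual model.py):

```python
import torch
import torch.nn.functional as F

# Dummy stand-ins for the tensors in model.py, just to demonstrate the two calls
attn_energies = torch.randn(4, 10)        # placeholder attention scores
cat_features = torch.randn(4, 10, 1024)   # placeholder for torch.cat([hidden, encoder_outputs], 2)
attn = torch.nn.Linear(1024, 256)

# 1) softmax: pass dim explicitly instead of relying on the deprecated implicit choice
energy = F.softmax(attn(cat_features), dim=2)

# 2) relu: it takes no dim argument, so drop it
attn_weights = F.relu(attn_energies).unsqueeze(1)

print(energy.shape, attn_weights.shape)
```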

pskrunner14 avatar Oct 16 '18 10:10 pskrunner14

Upgrading torchtext to 0.3.x should solve this problem. I used to use 0.2.3 and ran into the same problem as you.
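
For reference, pip install --upgrade torchtext==0.3.1 is enough, and you can check what you actually have with (assuming your torchtext build exposes __version__, which 0.3.x does):

```python
import torch
import torchtext

# Print the installed versions to confirm the upgrade took effect
print(torch.__version__, torchtext.__version__)
```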

Linao1996 avatar Oct 20 '18 12:10 Linao1996

@pskrunner14
Why not update your code on GitHub? I also used your attention model and got the same error. I am using PyTorch 1.0.

lorybaby avatar Feb 04 '19 21:02 lorybaby