pytorch-seq2seq
Possible Inaccuracies in training script
Hi Ben! Thanks for sharing this wonderful tutorial!
I have a question about the training script of 5. Convolutional Seq2Seq. In `train()`, the model accepts `src` and `trg[:,:-1]` in order to drop the `<EOS>` tokens. However, wouldn't it be inaccurate to simply drop the last column if the batch of `trg` consists of sentences of varying lengths? In that case, the last column of some sentences may not be the `<EOS>` token but rather the `<PAD>` token. A small sketch of what I mean follows below.
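To make the concern concrete, here is a minimal sketch (my own, not from the tutorial) of a padded target batch; the token ids for `<sos>`, `<eos>` and `<pad>` are made up for illustration:

```python
import torch

# Assumed token ids, purely for illustration.
SOS, EOS, PAD = 2, 3, 1

trg = torch.tensor([
    [SOS, 11, 12, 13, EOS],   # longest sentence: last column is <eos>
    [SOS, 21, 22, EOS, PAD],  # shorter sentence: last column is <pad>
])

# Slicing off only the final column, as in train():
decoder_input = trg[:, :-1]
print(decoder_input)
# tensor([[ 2, 11, 12, 13],
#         [ 2, 21, 22,  3]])
# The shorter sentence loses a <pad> token, while its <eos> is still fed
# to the decoder as input.
```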
Regards,
P.S. I've added a screenshot for easy reference.