pytorch-seq2seq
Model actually corrects "ground truth" (6 - Attention is All You Need).
Thank you for making the tutorials! I just wanted to bring an observation.
When analyzing the test set in Tutorial 6, you mention that the model produced:
A decent translation with young being omitted
But if we look at the translations we see something even better:
src = ['eine', 'mutter', 'und', 'ihr', 'kleiner', 'sohn', 'genießen', 'einen', 'schönen', 'tag', 'im', 'freien', '.']
trg = ['a', 'mother', 'and', 'her', 'young', 'song', 'enjoying', 'a', 'beautiful', 'day', 'outside', '.']
predicted trg = ['a', 'mother', 'and', 'her', 'son', 'enjoying', 'a', 'beautiful', 'day', 'outside', '.', '<eos>']
The model may have omitted "young", but it correctly inferred that the word should be "son", not "song" — the ground-truth target actually contains a typo.
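To make the discrepancy explicit, here is a minimal sketch (not part of the tutorial) that diffs the reference and predicted token lists with Python's standard-library `difflib`, surfacing exactly which tokens were dropped or substituted:

```python
from difflib import SequenceMatcher

# Reference (with the "song" typo) and model prediction from the example above,
# with special tokens stripped.
trg = ['a', 'mother', 'and', 'her', 'young', 'song', 'enjoying',
       'a', 'beautiful', 'day', 'outside', '.']
pred = ['a', 'mother', 'and', 'her', 'son', 'enjoying',
        'a', 'beautiful', 'day', 'outside', '.']

# Report every non-matching region between the two token sequences.
for op, i1, i2, j1, j2 in SequenceMatcher(a=trg, b=pred).get_opcodes():
    if op != 'equal':
        print(op, trg[i1:i2], '->', pred[j1:j2])
# prints: replace ['young', 'song'] -> ['son']
```

The single `replace` opcode shows the model collapsed the erroneous "young song" into "son" in one step, which supports the observation that the prediction is arguably closer to the intended meaning than the reference itself.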
Well spotted, that is an amazing find!
I'll be sure to add a note to the tutorials.