Stable-Pix2Seq
details of transformer code
Thank you for your work. I have a question about the sequence embedding; the screenshot is from transformer.py.
When you compute the sequence embedding, the position embedding has already been added to it, as follows:
Why do you then feed the same position embedding into the decoder layer? After this operation, the position embedding is added to the sequence embedding twice.
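To make the concern concrete, here is a minimal numeric sketch of the pattern being asked about. The helper name `with_pos_embed` mirrors the DETR-style convention; the actual Stable-Pix2Seq code may differ, and the numbers are toy values chosen only to show the double add.

```python
def with_pos_embed(x, pos):
    # DETR-style helper: add the positional embedding element-wise.
    return [xi + pi for xi, pi in zip(x, pos)]

seq_embed = [0.1, 0.2, 0.3]   # toy sequence embedding
pos_embed = [1.0, 1.0, 1.0]   # toy position embedding

# Step 1: position embedding is added once when the decoder input is built.
tgt = with_pos_embed(seq_embed, pos_embed)

# Step 2: inside the decoder layer, the same position embedding is passed
# again (as the query positional embedding) and added before self-attention.
q = with_pos_embed(tgt, pos_embed)

# Each element of q now equals seq + 2 * pos, i.e. the position embedding
# has been applied twice -- which is exactly the question raised above.
```

If the second addition is intentional (as in DETR, where the query positional embedding is re-added at every decoder layer), it would help to have that confirmed; otherwise it looks like a duplicated addition.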