Abstractive-Summarization-With-Transfer-Learning

Position embedding not added to BERT encoder input

Open · ankit011094 opened this issue 6 years ago · 1 comment

```python
import texar as tx  # Texar-TF; `word_embeds` and `src_segment_ids`
                    # are defined earlier in the same script.

# Creates segment embeddings for each type of tokens.
segment_embedder = tx.modules.WordEmbedder(
    vocab_size=bert_config.type_vocab_size,
    hparams=bert_config.segment_embed)
segment_embeds = segment_embedder(src_segment_ids)

input_embeds = word_embeds + segment_embeds
```

Per the BERT paper, the input embeddings are the sum of the token (WordPiece) embeddings, the segment embeddings, and the position embeddings. As we can see in `input_embeds = word_embeds + segment_embeds`, the position embeddings are missing.
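For reference, here is a minimal sketch of what adding the position term explicitly might look like, using Texar's `tx.modules.PositionEmbedder`. The `bert_config.position_size` and `bert_config.position_embed` field names are assumptions mirroring the segment fields above, not taken from the repo:

```python
import tensorflow as tf
import texar as tx

# Hypothetical sketch, not the repo's code: sum all three embedding
# types as described in the BERT paper. `bert_config.position_size`
# and `bert_config.position_embed` are assumed field names.
position_embedder = tx.modules.PositionEmbedder(
    position_size=bert_config.position_size,
    hparams=bert_config.position_embed)

# Embed positions 0..seq_len-1; the [seq_len, dim] result broadcasts
# over the batch dimension when added to [batch, seq_len, dim].
seq_len = tf.shape(word_embeds)[1]
position_embeds = position_embedder(positions=tf.range(seq_len))

input_embeds = word_embeds + segment_embeds + position_embeds
```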

ankit011094 · May 30 '19 12:05

The position embedding is already part of Texar's internal code: the `TransformerEncoder` used as the BERT encoder adds position embeddings to its inputs internally.
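One way to check this, as a sketch (the exact hparams keys vary by Texar version, so treat the key names as assumptions): print the encoder's default hyperparameters and, after building the graph, the trainable variables, and look for position-embedder entries.

```python
import tensorflow as tf
import texar as tx

# Sketch: inspect where the position embedding lives. The
# TransformerEncoder's default hparams include its internal
# position-embedder configuration (key names vary by Texar version).
print(tx.modules.TransformerEncoder.default_hparams())

# After the model graph is built, the learned position-embedding
# variable also shows up among the trainable variables.
print([v.name for v in tf.trainable_variables() if 'position' in v.name])
```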

santhoshkolloju · Aug 08 '19 04:08