pytorch-openai-transformer-lm

Clarifying last step of the 'transform_roc' function

Open sharpsy opened this issue 6 years ago • 7 comments

Thanks for the implementation, it is a big help! I'm trying to wrap my head around the code and I had some questions here and there. One of them concerned the last step of the transform_roc function: it was not clear to me why the input sequence is augmented with another dimension containing increasing embedding ids, ids that are greater than the tokens from the source data vocabulary. Namely: xmb[:, :, :, 1] = np.arange(n_vocab + n_special, n_vocab + n_special + n_ctx)

The explanation is in the paper itself: "We used learned position embeddings instead of the sinusoidal version proposed in the original work." Those are the ids of the position embeddings, and they are added to the input data in the forward() method of the TransformerModel. It took me some time to find this, and we could help others who struggle with the same line.

I would suggest adding comments to the code that will help with understanding.

In the transform_roc function, before the last step:
# Position information that is added to input embeddings in the TransformerModel
And then in the forward() method of the TransformerModel, before the h = e.sum(dim=2) line:
# Add position embeddings to the input embeddings
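For anyone else landing here, a minimal self-contained sketch of how the two pieces fit together (toy sizes, and the candidate dimension of the real xmb is dropped, so this is an illustration rather than the repo's exact code):

```python
import numpy as np
import torch
import torch.nn as nn

# Toy sizes for illustration; not the values used in the repo.
n_vocab, n_special, n_ctx, n_embd = 10, 3, 5, 8
n_batch = 2

# A single embedding matrix holds vocabulary tokens, special tokens and positions.
embed = nn.Embedding(n_vocab + n_special + n_ctx, n_embd)

# xmb[..., 0]: token ids (dummy vocabulary ids here).
# xmb[..., 1]: position ids, offset past the vocabulary and special tokens.
xmb = np.zeros((n_batch, n_ctx, 2), dtype=np.int64)
xmb[:, :, 0] = np.random.randint(0, n_vocab, size=(n_batch, n_ctx))
# Position information that is added to input embeddings in the TransformerModel
xmb[:, :, 1] = np.arange(n_vocab + n_special, n_vocab + n_special + n_ctx)

x = torch.from_numpy(xmb)
e = embed(x)           # shape: (n_batch, n_ctx, 2, n_embd)
# Add position embeddings to the input embeddings
h = e.sum(dim=2)       # shape: (n_batch, n_ctx, n_embd)
print(h.shape)         # torch.Size([2, 5, 8])
```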

sharpsy avatar Jul 18 '18 10:07 sharpsy

I just found https://github.com/huggingface/pytorch-openai-transformer-lm/issues/12 and https://github.com/openai/finetune-transformer-lm/issues/9

Not sure how I missed the first issue - I did look at the open issues - but it shows that comments in this code would help with understanding.

sharpsy avatar Jul 18 '18 10:07 sharpsy

Hi @sharpsy,

I also think that this comment would help people understand this code. Do you want to add it yourself or would you like me to create the pull request?

rodgzilla avatar Jul 18 '18 10:07 rodgzilla

Ok, I'll create the pull request.

sharpsy avatar Jul 18 '18 11:07 sharpsy

What I don't understand is why we need to start from n_vocab + n_special and not, e.g., from 0. Any explanation?

teucer avatar Jul 18 '18 12:07 teucer

Indices from 0 to n_vocab are tokens from the vocabulary (the data), indices from n_vocab to n_vocab + n_special are the special tokens: _start_, _delimiter_ and _classify_. Everything above that, up to n_vocab + n_special + n_ctx, is used to encode the position.

sharpsy avatar Jul 18 '18 12:07 sharpsy

The indices from 0 to n_vocab + n_special are already taken in the embedding matrix by the vocabulary words (n_vocab) and the special tokens such as _start_, _delimiter_ and _classify_.
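To spell out the layout of the shared embedding matrix, a small sketch (the numbers below are illustrative, not the exact values used for ROCStories):

```python
# Illustrative numbers only; the real values depend on the encoder and dataset.
n_vocab, n_special, n_ctx = 40000, 3, 77

vocab_ids    = range(0, n_vocab)                    # BPE vocabulary tokens
special_ids  = range(n_vocab, n_vocab + n_special)  # _start_, _delimiter_, _classify_
position_ids = range(n_vocab + n_special,
                     n_vocab + n_special + n_ctx)   # one id per context position

# All three blocks index the same embedding matrix of size
# (n_vocab + n_special + n_ctx, n_embd), so position ids must start at
# n_vocab + n_special to avoid colliding with token and special-token ids.
print(position_ids.start, position_ids.stop)  # 40003 40080
```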

rodgzilla avatar Jul 18 '18 12:07 rodgzilla

thx!

teucer avatar Jul 18 '18 12:07 teucer