
Embedding layer dimensionality from seq2seq-batch example

jsuit opened this issue on Jul 28, 2017 · 0 comments

The docs say that the embedding layer has the following input/output dimensionality:

- Input: LongTensor of shape (N, W), where N = mini-batch size and W = number of indices to extract per mini-batch
- Output: (N, W, embedding_dim)

Yet the tutorial feeds the embedding layer input with dimensionality (W, N) and gets output with dimensionality (W, N, embedding_dim). If I understand this correctly (which I might not), the order of the dimensions differs between your seq2seq batches example and the docs. If so, should the matrices be transposed before the inputs go through the embedding layer?
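
To make the question concrete, here is a minimal sketch of the two layouts being discussed; the sizes are made up for illustration, and it just prints the output shape the embedding layer produces for each input layout:

```python
import torch
import torch.nn as nn

# Hypothetical sizes, purely for illustration
vocab_size, embedding_dim = 10, 4
seq_len, batch_size = 5, 3          # W = seq_len, N = batch_size

embedding = nn.Embedding(vocab_size, embedding_dim)

# Tutorial-style layout: (W, N), i.e. sequence-first
seq_first = torch.randint(0, vocab_size, (seq_len, batch_size))
print(embedding(seq_first).shape)    # torch.Size([5, 3, 4]) -> (W, N, embedding_dim)

# Docs-style layout: (N, W), i.e. batch-first
batch_first = seq_first.t()
print(embedding(batch_first).shape)  # torch.Size([3, 5, 4]) -> (N, W, embedding_dim)
```

In both cases the layer just looks up each index and appends embedding_dim as a trailing dimension, so the question is whether the (W, N) layout used in the tutorial is intentional or whether a transpose is needed to match the docs.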
