annotated_deep_learning_paper_implementations
MultiHeadAttention parameter setting
Is the in_features parameter of the output linear layer in the MultiHeadAttention class set incorrectly in the mha.py file? Shouldn't in_features be heads * d_k rather than d_model?
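A minimal sketch of the suggested change, assuming the layer is currently declared as nn.Linear(d_model, d_model) (variable names follow mha.py; the surrounding class is elided):

```python
import torch.nn as nn

heads, d_model = 8, 512
d_k = d_model // heads  # mha.py's default; heads * d_k == d_model only if d_model % heads == 0

# Assumed current declaration: correct only under that divisibility assumption.
output = nn.Linear(d_model, d_model)

# Suggested declaration: the concatenated heads always have heads * d_k features,
# so this stays correct even if d_k is configured independently of d_model.
output = nn.Linear(heads * d_k, d_model)
```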
Relatedly, the get_positional_encoding function of the positional encoder raises an error when d_model is set to an odd number.
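A sketch of one possible fix, assuming get_positional_encoding builds the standard sinusoidal table with sin on the even indices and cos on the odd indices: for odd d_model the cosine term has one more column than the odd-index slice, so truncating the frequency vector makes the shapes match (a no-op when d_model is even).

```python
import math
import torch

def get_positional_encoding(d_model: int, max_len: int = 5000) -> torch.Tensor:
    """Sinusoidal positional encodings that also work for odd d_model."""
    encodings = torch.zeros(max_len, d_model)
    position = torch.arange(0, max_len, dtype=torch.float32).unsqueeze(1)
    two_i = torch.arange(0, d_model, 2, dtype=torch.float32)
    div_term = torch.exp(two_i * -(math.log(10000.0) / d_model))
    # Even indices take sin; for odd d_model this fills one more column than cos does.
    encodings[:, 0::2] = torch.sin(position * div_term)
    # Odd indices take cos; truncate the frequencies so shapes match for odd d_model.
    encodings[:, 1::2] = torch.cos(position * div_term[: d_model // 2])
    return encodings
```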
Our implementation assumes that heads * d_k = d_model. We need to change that.
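Until the layer sizes are generalized, the assumption could be made explicit so non-divisible configurations fail fast instead of producing a shape error deep inside the forward pass. A hypothetical guard (projections and forward() elided):

```python
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    def __init__(self, heads: int, d_model: int):
        super().__init__()
        self.d_k = d_model // heads
        # Fail fast while the code still assumes heads * d_k == d_model;
        # e.g. heads=8 with d_model=511 would otherwise break later.
        assert heads * self.d_k == d_model, "d_model must be divisible by heads"
        # With the assertion holding, heads * self.d_k equals d_model here.
        self.output = nn.Linear(heads * self.d_k, d_model)
        # ... query/key/value projections and forward() elided ...
```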