nanoGPT
Why use a learnable position embedding, just like the token embedding?
https://github.com/karpathy/nanoGPT/blob/7f74652843d8cbea31e2a9c986caf4a0ad452a6c/model.py#L136
I'd like to ask why nanoGPT doesn't try other kinds of positional embeddings. What is the advantage of using a learnable position embedding? Thanks.
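For context on what the question is comparing, here is a minimal sketch of the two options. The first mirrors the GPT-2 style used in the linked line (a plain `nn.Embedding` over positions, trained like the token table); the second is the fixed sinusoidal encoding from the original Transformer paper. The class names and shapes here are illustrative, not nanoGPT's actual module layout.

```python
import math
import torch
import torch.nn as nn

class LearnedPositionalEmbedding(nn.Module):
    """GPT-2 style: one trainable vector per position, learned like the token table."""
    def __init__(self, block_size: int, n_embd: int):
        super().__init__()
        self.wpe = nn.Embedding(block_size, n_embd)  # (block_size, n_embd) trainable weights

    def forward(self, tok_emb: torch.Tensor) -> torch.Tensor:
        # tok_emb: (batch, seq_len, n_embd)
        t = tok_emb.size(1)
        pos = torch.arange(t, device=tok_emb.device)  # positions 0..t-1
        return tok_emb + self.wpe(pos)                # broadcasts over the batch dim

class SinusoidalPositionalEmbedding(nn.Module):
    """Fixed (non-learned) alternative: sin/cos features, stored as a buffer."""
    def __init__(self, block_size: int, n_embd: int):
        super().__init__()
        pos = torch.arange(block_size).unsqueeze(1)                                  # (block_size, 1)
        div = torch.exp(torch.arange(0, n_embd, 2) * (-math.log(10000.0) / n_embd))  # (n_embd/2,)
        pe = torch.zeros(block_size, n_embd)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)  # not a parameter, so no gradients

    def forward(self, tok_emb: torch.Tensor) -> torch.Tensor:
        return tok_emb + self.pe[: tok_emb.size(1)]

# quick shape check: both add position information without changing the tensor shape
x = torch.randn(2, 16, 64)                               # (batch=2, seq_len=16, n_embd=64)
print(LearnedPositionalEmbedding(128, 64)(x).shape)      # torch.Size([2, 16, 64])
print(SinusoidalPositionalEmbedding(128, 64)(x).shape)   # torch.Size([2, 16, 64])
```

The practical difference is that the learned version adds `block_size * n_embd` parameters and only covers positions seen during training, while the sinusoidal version is parameter-free; which trade-off is preferable is exactly what the question is asking about.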