LLMs-from-scratch
fix issue #664 - inverted token and pos emb layers
fixes #664
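For context, a minimal sketch of the corrected setup this PR is about: the token embedding layer should be sized by the vocabulary and the positional embedding layer by the context length, not the other way around. The dimensions below are assumptions for illustration (GPT-2-style values), not taken from this diff:

```python
import torch
import torch.nn as nn

# Assumed example dimensions, not from the actual diff
vocab_size, context_length, emb_dim = 50257, 1024, 768

# Token embedding: one row per vocabulary entry
tok_emb = nn.Embedding(vocab_size, emb_dim)
# Positional embedding: one row per position in the context window
pos_emb = nn.Embedding(context_length, emb_dim)

# Example batch of token IDs with shape (batch_size, seq_len)
token_ids = torch.randint(0, vocab_size, (2, 4))

# Token and positional embeddings are summed element-wise
x = tok_emb(token_ids) + pos_emb(torch.arange(token_ids.shape[1]))
print(x.shape)  # torch.Size([2, 4, 768])
```

If the two layers are inverted (e.g., the token lookup is built with `context_length` rows), any token ID at or above `context_length` raises an index error, which is presumably what issue #664 reported.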
Check out this pull request on ReviewNB.
See visual diffs & provide feedback on Jupyter Notebooks.
Powered by ReviewNB
Maybe it would make sense to remove everything after `encoded_text = tokenizer.encode(raw_text)`, as it's not relevant to the exercise. What do you think, @rasbt?
Thanks for the fix, @casinca. And I agree with you, @d-kleine: the lines seemed redundant, so I removed them.