dual-obsidian-client
feat: add gpt-neo model handling
This PR enables GPT-Neo model loading using the same transformers API from Hugging Face.
Related: #40
Google Colab with GPT-Neo model support:
https://colab.research.google.com/drive/1xqEZeZY3aYl4w859Ej4sCsX-2LxBGU1l?usp=sharing
Thanks for the PR! As mentioned on Discord (https://discord.com/channels/817119487999606794/825717174257319974/833584636533407745), I think using the AutoModel class from transformers would make the implementation somewhat simpler, as you can simply give it the local path to the model and it can figure out what's in there. What do you think? Not sure about the tokenizer, though, but I think both GPT-2 models and GPT-Neo use similar tokenizers?
Hi @paulbricman!
I've followed the transformers docs and both models use the same GPT2Tokenizer class, but it's totally possible to replace the model-specific code with AutoTokenizer, AutoConfig, and AutoModel instead.
After work, I'll take a look at this.
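A minimal sketch of the Auto* approach discussed above (the function name and the local directory path are illustrative, not from the PR): the Auto classes read the `config.json` inside the model directory and instantiate the matching architecture, so the same loading code works for both GPT-2 and GPT-Neo checkpoints.

```python
def load_local_model(model_dir):
    """Load a causal LM (GPT-2, GPT-Neo, ...) from a local directory.

    `model_dir` is a hypothetical path to a saved checkpoint; the
    transformers Auto* classes inspect its config.json to pick the
    right architecture and tokenizer, so no model-specific branching
    is needed in the caller.
    """
    # Imported inside the function so the module can be inspected
    # without transformers installed.
    from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

    config = AutoConfig.from_pretrained(model_dir)
    tokenizer = AutoTokenizer.from_pretrained(model_dir)
    model = AutoModelForCausalLM.from_pretrained(model_dir, config=config)
    return model, tokenizer


if __name__ == "__main__":
    # Example usage with a hypothetical local checkpoint directory:
    model, tokenizer = load_local_model("./models/gpt-neo-125M")
```

This would replace per-architecture code paths (e.g. an explicit GPT2Tokenizer call) with a single loader, which is the simplification suggested in the review comment.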
@paulbricman done!