LLM-VM
Fix Llama2 implementation: Cannot import name 'Llama2Tokenizer' from 'transformers'
I created a quickstart test file that is essentially the same as the example in the documentation:
from llm_vm.client import Client

client = Client(
    big_model='neo',
    big_model_config={'model_uri': 'EleutherAI/gpt-neo-1.3B'},
    small_model='neo',
    small_model_config={'model_uri': 'EleutherAI/gpt-neo-125m'})

response = client.complete(prompt='What is Anarchy?', context='')
print(response)
But when I try to run it, it returns the following error:
python quickstart-test.py
Traceback (most recent call last):
File ".../LLM-VM/quickstart-test.py", line 2, in <module>
from llm_vm.client import Client
File ".../LLM-VM/src/llm_vm/client.py", line 3, in <module>
import llm_vm.onsite_llm as llms
File ".../LLM-VM/src/llm_vm/onsite_llm.py", line 6, in <module>
from transformers import (
ImportError: cannot import name 'Llama2Tokenizer' from 'transformers' (.../LLM-VM/.venv/lib/python3.10/site-packages/transformers/__init__.py)
If I check out commit 7a5877b6a, everything works fine.
I checked the Hugging Face transformers documentation: the Llama 2 model uses the same tokenizer classes as the original Llama. I also checked the transformers repo, and there is no llama2 folder and no Llama2Tokenizer or Llama2ForCausalLM classes.
I have created a PR where I fix the issue, but I was not able to test it because I am still waiting for Meta's approval for access to the Llama v2 model.
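To confirm which symbols a given transformers install actually exports, a quick check like the following can help. This is a sketch: `has_name` is a hypothetical helper, demonstrated on a stdlib module so it runs even without transformers installed; substitute `"transformers"` and `"Llama2Tokenizer"` to reproduce the check from this issue.

```python
import importlib

def has_name(module_name, attr):
    """Return True if `module_name` can be imported and exports `attr`."""
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(module, attr)

# Demonstrated on the stdlib json module, which is always available:
print(has_name("json", "JSONDecoder"))      # True
print(has_name("json", "Llama2Tokenizer"))  # False
```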
Llama 2 uses the same tokenizers as Llama (source: https://huggingface.co/docs/transformers/main/model_doc/llama2). I have fixed this in PR #185, which was for issue #183.
@abhigya-sodani shouldn't this issue be closed as completed, given that PR #162 fixed this and added back OpenLlama?