stanford_alpaca
Solve BUG: AttributeError: module transformers has no attribute LLaMATokenizer
I want to follow the guide below.
Given Hugging Face hasn't officially supported the LLaMA models, we fine-tuned LLaMA with Hugging Face's transformers library by installing it from a particular fork (i.e. this PR to be merged). The hash of the specific commit we installed was 68d640f7c368bcaaaecfc678f11908ebbd3d6176. But when I clicked through to the PR and tried to run the example:
```python
tokenizer = transformers.LLaMATokenizer.from_pretrained("/output/path/tokenizer/")
model = transformers.LLaMAForCausalLM.from_pretrained("/output/path/llama-7b/")
batch = tokenizer(
    "The primary use of LLaMA is research on large language models, including",
    return_tensors="pt", add_special_tokens=False
)
batch = {k: v.cuda() for k, v in batch.items()}
generated = model.generate(batch["input_ids"], max_length=100)
print(tokenizer.decode(generated[0]))
```

I got an error: `AttributeError: module transformers has no attribute LLaMATokenizer`. If you hit the same bug, just change your code to:

```python
tokenizer = transformers.LlamaTokenizer.from_pretrained("/output/path/tokenizer/")
model = transformers.LlamaForCausalLM.from_pretrained("/output/path/llama-7b/")
batch = tokenizer(
    "The primary use of LLaMA is research on large language models, including",
    return_tensors="pt", add_special_tokens=False
)
batch = {k: v.cuda() for k, v in batch.items()}
generated = model.generate(batch["input_ids"], max_length=100)
print(tokenizer.decode(generated[0]))
```

This bug is caused by incorrect letter capitalization.
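A defensive sketch (the helper below is my own, not part of the guide): since the pre-merge fork exported `LLaMATokenizer` while the merged code exports `LlamaTokenizer`, a script can look up whichever capitalization the installed transformers build actually exposes instead of hard-coding one:

```python
# Hedged sketch (my own helper, not from the guide): resolve the LLaMA
# classes under either capitalization, so the same script works against
# both the pre-merge fork and the merged transformers release.
def resolve_llama_classes(module):
    """Return (tokenizer_cls, model_cls) from a transformers-like module."""
    candidates = [
        ("LlamaTokenizer", "LlamaForCausalLM"),   # merged naming
        ("LLaMATokenizer", "LLaMAForCausalLM"),   # pre-merge fork naming
    ]
    for tok_name, model_name in candidates:
        tok_cls = getattr(module, tok_name, None)
        model_cls = getattr(module, model_name, None)
        if tok_cls is not None and model_cls is not None:
            return tok_cls, model_cls
    raise AttributeError(
        "No LLaMA tokenizer/model classes found; install transformers "
        "from a commit that includes the LLaMA PR"
    )
```

Usage would then be `tokenizer_cls, model_cls = resolve_llama_classes(transformers)` followed by the usual `.from_pretrained(...)` calls on each.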
I see the same error, but fixing the capitalization didn't fix it for me.

I'm using transformers 4.27.1; is it a different version?
I got the same error too. Please suggest how to fix this.
68d640f7c368bcaaaecfc678f11908ebbd3d6176
We are getting this error and would appreciate your help:
`ValueError: Tokenizer class LlamaTokenizer does not exist or is not currently imported.`
> I'm using transformers 4.27.1; is it a different version?
I didn't install transformers with pip. I downloaded the transformers source from the "llama_push" branch on GitHub and moved the downloaded files into my conda environment.
Similar to the previous answers, the following steps worked for me:
- Git clone the transformers repo from the llama_push branch.
- cd into the repo and run `git checkout 68d640f7c368bcaaaecfc678f11908ebbd3d6176`.
- Install the transformers package by running `python setup.py install`.
- Consolidate all output files from the two subfolders in the PR (https://github.com/huggingface/transformers/pull/21955) into a single folder.
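The steps above can be sketched as shell commands (the clone URL is an assumption based on the PR's source branch, and paths should be adjusted to your setup):

```shell
# Sketch of the steps above (assumed clone URL; adjust to your setup).
# Clone the fork hosting the llama_push branch, pin the exact commit
# mentioned in the guide, then install from source.
git clone -b llama_push https://github.com/zphang/transformers.git
cd transformers
git checkout 68d640f7c368bcaaaecfc678f11908ebbd3d6176
python setup.py install   # or: pip install .
```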
Yeah, you have to install from the transformers GitHub repo. I had thought that since the PR was merged it would be in an updated pip package, but it's not yet.
`pip install git+https://github.com/huggingface/transformers.git` works for me
LlamaTokenizer instead of LLaMATokenizer
@ruian0 Thanks for your idea. I fixed this bug, but then I hit another one: `Exception: Could not find the transformer layer class to wrap in the model.` Do you know how to fix this problem?
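A guess at this follow-up error, assuming the Stanford Alpaca training command is being used: the FSDP auto-wrap option names a decoder-layer class, and that class was renamed in the same capitalization change, so the old spelling no longer matches any layer in the model. The flag below is the real Hugging Face `TrainingArguments` option; the class names are the assumed before/after spellings:

```shell
# Hedged guess (not confirmed in this thread): if your training command
# passes the old class name to FSDP auto-wrap, e.g.
#   --fsdp_transformer_layer_cls_to_wrap 'LLaMADecoderLayer'
# update it to the renamed class:
#   --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer'
```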
Another nice solution:
Install the transformers lib by running this command: `pip install -q git+https://github.com/zphang/transformers@c3dc391`. That worked well here.
`transformers.LLaMATokenizer` was changed to `transformers.LlamaTokenizer`.