ColossalAI
[BUG]: AttributeError: 'LlamaActor' object has no attribute 'resize_token_embeddings'
🐛 Describe the bug
When running the Stage 3 code https://github.com/hpcaitech/ColossalAI/blob/main/applications/Chat/examples/train_prompts.py with LLaMA, this bug is encountered at line 137:
tokenizer = prepare_llama_tokenizer_and_embedding(tokenizer, actor)
The details of this bug:
Traceback (most recent call last):
File "experience.py", line 183, in <module>
tokenizer = prepare_llama_tokenizer_and_embedding(tokenizer, actor)
File "/XXX/XXX/ColossalAI/applications/Chat/coati/utils/tokenizer_utils.py", line 37, in prepare_llama_tokenizer_and_embedding
smart_tokenizer_and_embedding_resize(
File "/XXX/XXX/ColossalAI/applications/Chat/coati/utils/tokenizer_utils.py", line 68, in smart_tokenizer_and_embedding_resize
model.resize_token_embeddings(len(tokenizer))
File "/root/anaconda3/envs/coati/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1269, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'LlamaActor' object has no attribute 'resize_token_embeddings'
This bug is the same as this one: https://github.com/hpcaitech/ColossalAI/issues/3389
Environment
No response
Use the SFT LLaMA-7B model from stage 1, not the original LLaMA-7B.
Thanks! By the way, if I use tokenizer.pad_token = tokenizer.eos_token to avoid this bug, will it make a big difference in training?
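For reference, the traceback suggests the actor is a wrapper that does not forward resize_token_embeddings to the model it holds. Assuming LlamaActor keeps the underlying Hugging Face model in a .model attribute (an assumption about coati's internals), calling resize_token_embeddings on the wrapped model rather than on the wrapper may be a workaround. A minimal self-contained sketch of the problem, using stand-in classes instead of the real coati/transformers types:

```python
class InnerModel:
    """Stand-in for the wrapped Hugging Face LlamaForCausalLM."""

    def __init__(self):
        self.vocab_size = 32000

    def resize_token_embeddings(self, new_size):
        # The real method re-allocates the embedding matrix; here we
        # only record the new size to illustrate the call path.
        self.vocab_size = new_size
        return self.vocab_size


class LlamaActorSketch:
    """Stand-in for LlamaActor: it wraps the model but does not expose
    resize_token_embeddings itself, hence the AttributeError."""

    def __init__(self, model):
        self.model = model


actor = LlamaActorSketch(InnerModel())

# actor.resize_token_embeddings(...) would raise AttributeError,
# matching the traceback above. Going through the wrapped model works:
actor.model.resize_token_embeddings(32001)
print(actor.model.vocab_size)  # 32001
```

This is only a sketch of the delegation issue, not a verified fix; whether the attribute is named .model in the installed ColossalAI version should be checked against the LlamaActor source.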