llama-lora-fine-tuning
Padding error when running `deepspeed fastchat/train/train_lora.py`
I get this error when I run `deepspeed fastchat/train/train_lora.py`:

```
ValueError: Asking to pad but the tokenizer does not have a padding token. Please select a token to use as `pad_token` (`tokenizer.pad_token = tokenizer.eos_token` e.g.) or add a new pad token via `tokenizer.add_special_tokens({'pad_token': '[PAD]'})`.
```
Upgrading fschat did not help.
Fix: add the following directly under the line `tokenizer.pad_token = tokenizer.unk_token`:

```python
tokenizer.add_special_tokens({
    "pad_token": "<PAD>",
})
```
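For context, the error occurs because Hugging Face tokenizers refuse to pad unless `pad_token` is set, and some LLaMA tokenizers ship without one. The decision logic behind the fix can be sketched as below; `ensure_pad_token` is a hypothetical helper (not part of FastChat), and a stand-in tokenizer class is used so the sketch runs without downloading a real model:

```python
# Sketch of the pad-token fix, assuming a Hugging Face-style tokenizer
# interface (pad_token / unk_token attributes, add_special_tokens()).
# ensure_pad_token is a hypothetical helper, not part of FastChat.

def ensure_pad_token(tokenizer):
    """Make sure the tokenizer has a pad_token so padding cannot fail."""
    if tokenizer.pad_token is not None:
        return tokenizer  # already fine, nothing to do
    if tokenizer.unk_token is not None:
        # Reuse the existing unk token; avoids adding new embedding rows.
        tokenizer.pad_token = tokenizer.unk_token
    else:
        # Register a brand-new <PAD> token, as in the fix above.
        tokenizer.add_special_tokens({"pad_token": "<PAD>"})
    return tokenizer


# Stand-in tokenizer used only to demonstrate the logic without a model.
class DummyTokenizer:
    def __init__(self, unk_token=None):
        self.pad_token = None
        self.unk_token = unk_token

    def add_special_tokens(self, mapping):
        self.pad_token = mapping["pad_token"]
```

Note: if a genuinely new token is added to a real tokenizer, the model's embedding matrix usually has to be resized to match, e.g. with `model.resize_token_embeddings(len(tokenizer))`; reusing `unk_token` (or `eos_token`) as the pad token avoids that step.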