llama-lora-fine-tuning
Issue when fine-tuning with --model_max_length 2048
Token indices sequence length is longer than the specified maximum sequence length for this model (2189 > 2048). Running this sequence through the model will result in indexing errors
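This warning means the tokenized example (2189 tokens) is longer than the context window set by `--model_max_length` (2048), so the overflow must be handled before the batch reaches the model. A minimal sketch of the usual fix, clipping each token-id sequence to the maximum length (plain Python, illustrative only; the constant mirrors the flag value above):

```python
MODEL_MAX_LENGTH = 2048  # value passed via --model_max_length

def truncate_ids(token_ids, max_length=MODEL_MAX_LENGTH):
    """Drop tokens beyond the model's maximum context length."""
    return token_ids[:max_length]

# Example: a sequence of 2189 token ids, as in the warning above.
ids = list(range(2189))
clipped = truncate_ids(ids)
print(len(clipped))  # 2048
```

With a Hugging Face tokenizer the same effect is achieved by tokenizing with `truncation=True, max_length=2048`, which silences the warning; the trade-off is that tokens past position 2048 are discarded rather than trained on.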