
How to add an EOS token

Open gamercoder153 opened this issue 1 year ago • 28 comments

How to add an EOS token?

gamercoder153 avatar May 03 '24 17:05 gamercoder153

Our conversational notebooks add EOS tokens to Llama-3; for example: https://colab.research.google.com/drive/1XamvWYinY6FOSX9GLvnqSjjsNflxdhNc?usp=sharing

All the notebooks on our GitHub page (https://github.com/unslothai/unsloth?tab=readme-ov-file#-finetune-for-free) add EOS tokens.
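For reference, here is a minimal sketch of what the formatting step in those notebooks does - appending `tokenizer.eos_token` to each training example. The `alpaca_prompt` template below is an illustrative stand-in, not the exact notebook code, and `tokenizer` is assumed to come from `FastLanguageModel.from_pretrained()`:

```python
# Illustrative sketch: append the EOS token when formatting training text.
# Assumes `tokenizer` was returned by FastLanguageModel.from_pretrained().
alpaca_prompt = """### Instruction:
{}

### Input:
{}

### Response:
{}"""

EOS_TOKEN = tokenizer.eos_token  # without this, the model never learns to stop

def formatting_prompts_func(examples):
    texts = []
    for instruction, inp, output in zip(
        examples["instruction"], examples["input"], examples["output"]
    ):
        # Appending EOS teaches the model to emit it after each response
        texts.append(alpaca_prompt.format(instruction, inp, output) + EOS_TOKEN)
    return {"text": texts}

# dataset = dataset.map(formatting_prompts_func, batched=True)
```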

danielhanchen avatar May 04 '24 10:05 danielhanchen

I am facing this error with your Colab notebook during inference (screenshot attached): https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing

gamercoder153 avatar May 04 '24 12:05 gamercoder153

I'm facing the same problem here.

mxtsai avatar May 04 '24 13:05 mxtsai

@mxtsai which model?

gamercoder153 avatar May 04 '24 14:05 gamercoder153

I've tried LLama 3 and other models. Not sure where the issue is...

mxtsai avatar May 04 '24 15:05 mxtsai

Oh wait, Llama-3 base, right? Hmm, where are you all doing inference - Ollama? llama.cpp?

danielhanchen avatar May 05 '24 03:05 danielhanchen

@danielhanchen in Colab, after fine-tuning

gamercoder153 avatar May 05 '24 10:05 gamercoder153

> @danielhanchen in Colab, after fine-tuning

I was having the same issue and created issue #416. I've posted a solution there.

KillerShoaib avatar May 05 '24 10:05 KillerShoaib

@KillerShoaib Man, thanks a lot for fixing that, I really appreciate it. Can you explain where to add it? I am using a Google Colab T4 GPU: https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing

gamercoder153 avatar May 05 '24 15:05 gamercoder153

@KillerShoaib @gamercoder153 @mxtsai Apologies, I just fixed it! No need to change code - I updated the tokenizer configs, so all should be fine now!
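If a stale tokenizer is still cached locally, something like this should pull the fixed config (a sketch - it assumes the updated config is live on the Hub):

```python
# Sketch: re-download the tokenizer so a cached (broken) config is bypassed.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "unsloth/llama-3-8b-bnb-4bit",
    force_download=True,  # ignore the local cache
)
print("eos:", tokenizer.eos_token, "| pad:", tokenizer.pad_token)  # should differ
```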

danielhanchen avatar May 05 '24 16:05 danielhanchen

> @KillerShoaib Man, thanks a lot for fixing that, I really appreciate it. Can you explain where to add it? I am using a Google Colab T4 GPU: https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing

As @danielhanchen mentioned, you don't need to change the code anymore; the bug has been fixed.

KillerShoaib avatar May 05 '24 18:05 KillerShoaib

Great! Thanks a lot guys @KillerShoaib @danielhanchen

gamercoder153 avatar May 05 '24 18:05 gamercoder153

@danielhanchen @KillerShoaib I checked it once again and it's literally the same (screenshot attached).

gamercoder153 avatar May 05 '24 19:05 gamercoder153

> @danielhanchen @KillerShoaib I checked it once again and it's literally the same (screenshot attached).

Okay, I think you're using a model that was fine-tuned on top of the old unsloth Llama 3 (where the pad token and EOS token were the same). In that case, you need to change the pad token value.

Here is the code to do that:

```python
################################### Existing Colab Code ###################################

from unsloth import FastLanguageModel
import torch
max_seq_length = 2048 # Choose any! We auto support RoPE Scaling internally!
dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.

# 4bit pre-quantized models we support for 4x faster downloading + no OOMs.
fourbit_models = [
    "unsloth/mistral-7b-bnb-4bit",
    "unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
    "unsloth/llama-2-7b-bnb-4bit",
    "unsloth/gemma-7b-bnb-4bit",
    "unsloth/gemma-7b-it-bnb-4bit", # Instruct version of Gemma 7b
    "unsloth/gemma-2b-bnb-4bit",
    "unsloth/gemma-2b-it-bnb-4bit", # Instruct version of Gemma 2b
    "unsloth/llama-3-8b-bnb-4bit", # [NEW] 15 Trillion token Llama-3
] # More models at https://huggingface.co/unsloth

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "your_finetuned_model_name",   ##### Change the name according to your finetuned model #####
    max_seq_length = max_seq_length,
    dtype = dtype,
    load_in_4bit = load_in_4bit,
    # token = "hf_...", # use one if using gated models like meta-llama/Llama-2-7b-hf
)

# If your model is already saved as a LoRA adapter, you don't need to call .get_peft_model()
model = FastLanguageModel.get_peft_model(
    model,
    r = 16, # Choose any number > 0 ! Suggested 8, 16, 32, 64, 128
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj",],
    lora_alpha = 16,
    lora_dropout = 0, # Supports any, but = 0 is optimized
    bias = "none",    # Supports any, but = "none" is optimized
    # [NEW] "unsloth" uses 30% less VRAM, fits 2x larger batch sizes!
    use_gradient_checkpointing = "unsloth", # True or "unsloth" for very long context
    random_state = 3407,
    use_rslora = False,  # We support rank stabilized LoRA
    loftq_config = None, # And LoftQ
)


################## Additional code to change the pad token value ##################

tokenizer.add_special_tokens({"pad_token": "<|reserved_special_token_0|>"})
model.config.pad_token_id = tokenizer.pad_token_id # update the model config
tokenizer.padding_side = 'right' # pad on the right (otherwise SFTTrainer shows a warning)

################## Rest of the Colab code ##################
...
```

After changing the pad token value, you need to fine-tune the model again so that it can learn to predict the EOS token. Try a few iterations (e.g. 30-50) and check whether the model is able to generate the EOS token.

This example is for models that were fine-tuned on top of the old unsloth Llama 3 (same pad & EOS token). Unsloth has updated their model; if you're using their current Llama 3 model, you won't have to follow these steps - just follow the original Colab notebook.
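For a quick sanity check after those extra steps, something like this should work (a sketch - it assumes the `model` and `tokenizer` from the code above and uses a placeholder prompt):

```python
# Sketch: verify the re-finetuned model now emits an EOS token.
from unsloth import FastLanguageModel

FastLanguageModel.for_inference(model)  # enable inference mode

inputs = tokenizer(
    ["### Instruction:\nSay hello.\n\n### Response:\n"],  # placeholder prompt
    return_tensors="pt",
).to("cuda")

outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    pad_token_id=tokenizer.pad_token_id,
)
print(tokenizer.decode(outputs[0]))
print("contains EOS:", tokenizer.eos_token_id in outputs[0].tolist())
```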

KillerShoaib avatar May 06 '24 06:05 KillerShoaib

@KillerShoaib I am using this Colab notebook from their GitHub: https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing. The model is llama3-8b-instruct: https://huggingface.co/unsloth/llama-3-8b-Instruct

gamercoder153 avatar May 06 '24 10:05 gamercoder153

> @KillerShoaib I am using this Colab notebook from their GitHub: https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing. The model is llama3-8b-instruct: https://huggingface.co/unsloth/llama-3-8b-Instruct

I've just downloaded unsloth/llama-3-8b-Instruct and verified its pad token and EOS token values. They are different; as @danielhanchen mentioned, he has solved the issue.

[screenshot: tokenizer pad and EOS token values]
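To reproduce that check yourself (a short sketch, nothing unsloth-specific):

```python
# Sketch: confirm the pad and EOS tokens are no longer the same token.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-Instruct")
print("eos:", tok.eos_token, tok.eos_token_id)
print("pad:", tok.pad_token, tok.pad_token_id)
assert tok.pad_token_id != tok.eos_token_id, "pad and EOS still collide"
```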

I even trained the model on the Alpaca dataset for 60 epochs and got an answer with an EOS token:

[screenshot: model output ending with an EOS token]

Everything is working fine on my end. Are you sure you aren't using an already fine-tuned version of the old Llama 3 (which had the same EOS & pad token) that you've saved locally (or on the Hugging Face Hub) and are loading again?

KillerShoaib avatar May 06 '24 13:05 KillerShoaib

@KillerShoaib No, I am not using any older fine-tuned model. Let me try it once again.

gamercoder153 avatar May 06 '24 17:05 gamercoder153

@KillerShoaib It's the same stuff, dude!! It just generates <|end_of_text|> sometimes, and otherwise loops like this [image] until the 128 max new tokens run out.

And with text streaming set to true it does the same thing again.

gamercoder153 avatar May 06 '24 18:05 gamercoder153

> @KillerShoaib It's the same stuff, dude!! It just generates <|end_of_text|> sometimes, and otherwise loops like this [image] until the 128 max new tokens run out.
>
> And with text streaming set to true it does the same thing again.

Since you're getting the EOS token sometimes, there is no problem with the Llama 3 model. You need to fine-tune it for more iterations; the model is still learning to predict the EOS token.

KillerShoaib avatar May 07 '24 16:05 KillerShoaib

Adding pad_token_id solved this issue for me: `outputs = model.generate(**inputs, max_new_tokens = 200, use_cache = False, pad_token_id=tokenizer.pad_token_id)`
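For context, that call slots into the standard inference cell roughly like this (a sketch - the prompt is a placeholder, and `model`/`tokenizer` are assumed to be loaded already):

```python
# Sketch: inference with an explicit pad_token_id.
from unsloth import FastLanguageModel

FastLanguageModel.for_inference(model)

inputs = tokenizer(["### Instruction:\nSay hello.\n\n### Response:\n"],
                   return_tensors="pt").to("cuda")

outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    use_cache=False,
    # If unset, generate() falls back to eos_token_id for padding (and warns);
    # setting it explicitly avoids the pad/EOS conflation.
    pad_token_id=tokenizer.pad_token_id,
)
print(tokenizer.batch_decode(outputs))
```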

adeel-learner avatar May 13 '24 09:05 adeel-learner

@adeel-learner which notebook are you using?

gamercoder153 avatar May 13 '24 11:05 gamercoder153

@gamercoder153 Kaggle or Google Colab.

adeel-learner avatar May 13 '24 13:05 adeel-learner

@adeel-learner can you share it?

gamercoder153 avatar May 13 '24 17:05 gamercoder153

@gamercoder153 The same as provided by Unsloth, just putting my data in there!

adeel-learner avatar May 14 '24 10:05 adeel-learner

> Adding pad_token_id solved this issue for me: `outputs = model.generate(**inputs, max_new_tokens = 200, use_cache = False, pad_token_id=tokenizer.pad_token_id)`

Where in the notebook did you add this section of code, @adeel-learner?

gamercoder153 avatar May 14 '24 12:05 gamercoder153

@gamercoder153 At the inference portion of the notebook, right after the training portion!

adeel-learner avatar May 14 '24 14:05 adeel-learner

@adeel-learner OK, let me try.

gamercoder153 avatar May 14 '24 15:05 gamercoder153

If you're using the instruct model, you need to change the EOS token. The tokenizer still has the EOS token as <|end_of_text|> when it should be <|eot_id|>. When you build your Alpaca dataset, change this line:

```python
EOS_TOKEN = tokenizer.eos_token # Must add EOS_TOKEN
```

to this:

```python
EOS_TOKEN = "<|eot_id|>" # Must add EOS_TOKEN
```
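It can also help to make generation stop on that token. A sketch (it assumes `model`, `tokenizer`, and `inputs` from the inference cell above; `<|eot_id|>` must exist in the tokenizer's vocab):

```python
# Sketch: stop generation on <|eot_id|> instead of <|end_of_text|>.
eot_id = tokenizer.convert_tokens_to_ids("<|eot_id|>")
print("eot_id:", eot_id)  # should be a valid id, not the unk token

outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    eos_token_id=eot_id,  # halt on the instruct model's turn-end token
    pad_token_id=tokenizer.pad_token_id,
)
print(tokenizer.batch_decode(outputs))
```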

shensmobile avatar May 15 '24 16:05 shensmobile