NickyDark1

Results: 9 comments by NickyDark1

![image](https://github.com/unslothai/unsloth/assets/123802672/dd6a2521-5f22-4813-98a2-45d4abff76e0)

Example of a special token entry (from https://huggingface.co/NickyNicky/Phi-3-mini-128k-instruct_function/blob/main/tokenizer_config.json):

```json
"32005": {
  "content": "",
  "lstrip": false,
  "normalized": false,
  "rstrip": true,
  "single_word": false,
  "special": true
},
```
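For context, a minimal sketch of how such an entry can be inspected after loading the tokenizer from the repo linked above; it assumes a recent transformers release where `added_tokens_decoder` is exposed on the tokenizer:

```python
from transformers import AutoTokenizer

# Load the tokenizer from the repo linked above (assumption: it loads via AutoTokenizer)
tokenizer = AutoTokenizer.from_pretrained("NickyNicky/Phi-3-mini-128k-instruct_function")

# added_tokens_decoder maps token ids to AddedToken objects,
# carrying the same flags shown in tokenizer_config.json
entry = tokenizer.added_tokens_decoder.get(32005)
print(entry)                                   # content, lstrip, rstrip, special, ...
print(tokenizer.convert_ids_to_tokens(32005))  # the raw token string for id 32005
```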

Similar issue here. Would it make a difference whether the tokens are added as normal tokens or as special ones?

```python
# Register the new tokens as additional special tokens
special_tokens_dict = {'additional_special_tokens': ['[C1]', '[C2]', '[C3]', '[C4]']}
num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
# Grow the embedding matrix to cover the enlarged vocabulary
model.resize_token_embeddings(len(tokenizer))
```
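One observable difference, sketched under the assumption that `tokenizer` and `model` are already loaded as above: tokens added with `add_tokens` behave as normal vocabulary entries, while tokens added with `add_special_tokens` are never split and are dropped by `decode(skip_special_tokens=True)`. In practice you would pick one of the two paths.

```python
# Minimal sketch, assuming tokenizer and model are already loaded
tokenizer.add_tokens(['[C1]', '[C2]'])                                          # added as normal tokens
tokenizer.add_special_tokens({'additional_special_tokens': ['[C3]', '[C4]']})  # added as special tokens
model.resize_token_embeddings(len(tokenizer))

ids = tokenizer("[C1] hello [C3]")["input_ids"]
print(tokenizer.decode(ids, skip_special_tokens=True))   # special tokens such as [C3] are dropped
print(tokenizer.decode(ids, skip_special_tokens=False))  # everything is kept
```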

Version note: it works with transformers==4.36.2, but the new transformers==4.38.0 is not supported.

Does it only support this model?

```python
# Load a model from Hugging Face's Transformers
model_name = "bert-base-uncased"
```
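For what it's worth, a minimal sketch showing that the snippet itself is not tied to `bert-base-uncased`; any Hugging Face hub model id can be passed to the `Auto*` classes (whether a given model is supported by the library in question is a separate matter):

```python
from transformers import AutoModel, AutoTokenizer

# "bert-base-uncased" is just the example id; another hub model id can be substituted here
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
```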