mik8142

Results 2 comments of mik8142

Hey, maybe you can help me: I'm trying to fine-tune Llama:

```python
import torch
from transformers import BitsAndBytesConfig

base_model = "meta-llama/Llama-3.2-1B-Instruct"
torch_dtype = torch.float16
attn_implementation = "eager"

# 4-bit NF4 quantization config for QLoRA-style fine-tuning
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch_dtype,
    bnb_4bit_use_double_quant=True,
)
...
```
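For context, a plausible continuation of that setup (the loading call below is an assumption; the original comment is cut off after the config) passes the quantization config to the Transformers loader:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed continuation, not from the original comment:
# load the 4-bit quantized model and its tokenizer
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    attn_implementation=attn_implementation,
    torch_dtype=torch_dtype,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
```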

I found a solution that works for me: `setup_chat_format` complains if the tokenizer already has a chat template, so I reset it first:

```python
from trl import setup_chat_format

# Clear any existing chat template so setup_chat_format can install its own
if hasattr(tokenizer, "chat_template") and tokenizer.chat_template is not None:
    tokenizer.chat_template = None  # Reset the chat template
model, tokenizer = setup_chat_format(model, tokenizer)
```
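A quick sanity check (hypothetical, not part of the original comment): `setup_chat_format` switches the tokenizer to the ChatML template, so rendering a short conversation afterwards should show `<|im_start|>`/`<|im_end|>` markers instead of the Llama 3 format:

```python
messages = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant", "content": "Hi there."},
]
# Should now render ChatML-style <|im_start|> / <|im_end|> tokens
print(tokenizer.apply_chat_template(messages, tokenize=False))
```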