Daniel Han
Apologies, I can confirm there are some tokenization issues! Working on a fix
@AliHaider20 Unsure if my temporary fix will resolve this - update with `pip install --upgrade --force-reinstall --no-cache-dir git+https://github.com/unslothai/unsloth.git`. Colab / Kaggle don't need to update
@AliHaider20 I think I just fixed inference (hopefully!)
If it still doesn't work, I'll have to check NEFTune separately
Oh tbh I'm not certain - I haven't tried it, but I'm assuming it's resolved?
Actually I JUST noticed NEFTune was NEVER enabled during training, but during inference it gets enabled - hence the gibberish. As @ivsanro1 suggested, I tried doing it, but I found HF...
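For context on why this produces gibberish: NEFTune adds uniform noise to embedding outputs, and it is only meant to run during training. If the noise hook stays active (or only activates) at inference, `generate` sees perturbed embeddings and the output degrades. A minimal sketch of the usual gating, assuming a standard PyTorch forward hook - `neftune_hook` and `NEFTUNE_NOISE_ALPHA` are illustrative names, not Unsloth's actual fix:

```python
import torch

NEFTUNE_NOISE_ALPHA = 5.0  # hypothetical alpha; set per your training config

def neftune_hook(module, inputs, output):
    # Add uniform noise scaled by alpha / sqrt(seq_len * hidden_dim),
    # but ONLY in training mode - skipping this check reproduces the
    # gibberish-at-inference bug described above.
    if module.training:
        dims = output.size(1) * output.size(2)
        mag = NEFTUNE_NOISE_ALPHA / (dims ** 0.5)
        output = output + torch.empty_like(output).uniform_(-mag, mag)
    return output

# Typical usage (sketch):
# embeddings = model.get_input_embeddings()
# handle = embeddings.register_forward_hook(neftune_hook)
# ... train ...
# handle.remove()  # detach before model.generate() so inference stays clean
```

Removing the hook before generation (or relying on the `module.training` check after calling `model.eval()`) keeps inference noise-free.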
@AliHaider20 Kaggle is also fixed!
@erwe324 Oh thanks for all the help across issues - appreciate it :)
Working on an automatic model optimizer :)
@patrickjchen If you clone the Mistral notebook from https://www.kaggle.com/code/danielhanchen/kaggle-mistral-7b-unsloth-notebook as is, it should work. New Kaggle envs are broken for now