good idea~
> Do we have the same problem? #881

I'm not sure yet, but I've been using unsloth fine-tuning for 2 months, and I have a distinct feeling that the fine-tuning...
> @githubzuoyi Apologies on the issue! Do you know which model you are finetuning? Is it Llama 3.1?

I am using the 2024.8 version to fine-tune the Llama 3 model, to...
> Today I updated the unsloth version for the first time, to 2024.8, and noticed something strange. The fine-tuning results with the 2024.4 version were very good, but the...
> @githubzuoyi You could try a May 4th version which included some Llama-3 fixes. The commit is `a93a885c286934c9c7467324054ca3f9d526a2bd`
>
> Or an April 21st one which solved some more Llama-3...
Thank you very much! I installed the old version with `!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git@a93a885c286934c9c7467324054ca3f9d526a2bd"` and downgraded transformers to 4.39.0, and the fine-tuning results for Llama 3 8B are back to normal.
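For anyone else hitting this, the full workaround as a Colab cell looks roughly like the following (the transformers pin to 4.39.0 is just what worked in my setup; nearby versions may also be fine):

```
# Install unsloth pinned to the May 4th commit that includes the Llama-3 fixes
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git@a93a885c286934c9c7467324054ca3f9d526a2bd"
# Downgrade transformers to the version that worked with that commit in my setup
!pip install transformers==4.39.0
```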
> Hi @githubzuoyi, also interested in this issue. I tried installing unsloth with that commit but am getting the `TypeError: LlamaRotaryEmbedding.__init__() got an unexpected keyword argument 'config'` error again, how did...
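My guess (not confirmed) is that this error comes from a version mismatch between the old pinned unsloth commit and a newer transformers release, so checking what actually got installed is a quick first step:

```
# Confirm which versions ended up installed; a recent transformers alongside the old
# unsloth commit is the likely mismatch (pinning transformers==4.39.0 as above fixed it for me)
!pip show transformers unsloth | grep -E "Name|Version"
```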
Thanks for this great work! Are there any plans to release the training data soon? We plan to use the VAB test results as our evaluation data.