NotImplementedError: Unsloth: unsloth/Llama-3.2-11B-Vision-Instruct-bnb-4bit not supported yet!
I am trying to fine-tune Llama 3.2 11B Vision Instruct on text only, but according to Unsloth this model is not supported yet. Is there a plan to support it? This is what I get from the command output on my local machine (not a Colab notebook):
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
Traceback (most recent call last):
File "/home/ubuntu/aurora-PEFT/main.py", line 16, in <module>
model, tokenizer = FastLanguageModel.from_pretrained(
File "/home/ubuntu/aurora-PEFT/venv/lib/python3.10/site-packages/unsloth/models/loader.py", line 304, in from_pretrained
raise NotImplementedError(
NotImplementedError: Unsloth: unsloth/Llama-3.2-11B-Vision-Instruct-bnb-4bit not supported yet!
Make an issue to https://github.com/unslothai/unsloth!
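For reference, here is a minimal sketch of the loading call in my main.py that triggers the error. The model name matches the traceback; the other arguments are simplified placeholder values rather than my exact configuration:

```python
from unsloth import FastLanguageModel

# Minimal sketch of the call at main.py line 16 that raises the error.
# Only the model name is taken from the traceback; the remaining
# arguments are typical placeholder values for illustration.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-11B-Vision-Instruct-bnb-4bit",
    max_seq_length=2048,   # placeholder
    dtype=None,            # auto-detect
    load_in_4bit=True,
)
```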
Thank you in advance for your time and help! I can provide any other information upon request.