Unable to fuse Mistral-7B-Instruct-v0.2 after finetuning
Hi everyone,
I'm unable to fuse the model after finetuning; I get the error below. Can someone please help? All paths are correct and the adapter itself works fine.
```
antoine@Mac-Studio lora % python fuse.py --model "/Users/antoine/Documents/GitHub/EVD.COVID_ANALYSIS/EVD.COVID_ANALYSIS/Models.nosync/Mistral/Mistral-7B-Instruct-v0.2/" --save-path "/Users/antoine/Documents/GitHub/EVD.COVID_ANALYSIS/EVD.COVID_ANALYSIS/Models.nosync/Mistral/fine-tuned/" --adapter-file "/Users/antoine/Documents/GitHub/EVD.COVID_ANALYSIS/EVD.COVID_ANALYSIS/Models.nosync/Mistral/adapters/adapters.npz" --de-quantize
Loading pretrained model
Traceback (most recent call last):
  File "/Users/antoine/Documents/GitHub/EVD.COVID_ANALYSIS/EVD.COVID_ANALYSIS/mlx-examples-main-2/lora/fuse.py", line 55, in
```
The `--model` flag should point to the original model (Hugging Face repo or local path) that you fine-tuned with.
- Could you share the output of:

  ```
  ls /Users/antoine/Documents/GitHub/EVD.COVID_ANALYSIS/EVD.COVID_ANALYSIS/Models.nosync/Mistral/Mistral-7B-Instruct-v0.2/
  ```
- Could you share the command you used to fine-tune?
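While waiting for the `ls` output, a quick sanity check is to verify the model directory actually contains the files a Hugging Face checkout provides. This is a hypothetical helper, not part of mlx-examples; the `expected` file names are an assumption based on a typical Mistral download and may differ for your mlx-examples version:

```python
import os

def missing_model_files(model_dir, expected=("config.json", "tokenizer.model")):
    """Return the expected files that are absent from model_dir.

    NOTE: `expected` is an assumption based on a typical Hugging Face
    Mistral checkout; adjust the list for your setup.
    """
    present = set(os.listdir(model_dir)) if os.path.isdir(model_dir) else set()
    return sorted(set(expected) - present)
```

If this returns a non-empty list, `fuse.py` is likely failing because the directory is not a complete model checkout rather than because of the adapter.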