USMCM1A1
It would be great to convert the model files and adapter to a GGUF file.
@l0d0v1c : I dropped ".fuse" from the `python fuse.py` step and reformatted the hyphens, and got that to work. That second part has nothing to do with MLX, correct? I have...
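(For anyone following along, this is roughly what the fuse step looks like in mlx-examples/lora; flag names may differ between versions, and the paths here are placeholders rather than the exact ones used in this thread:)

```sh
# Fuse the trained LoRA adapter into the base model, writing out a
# normal set of model files that downstream tools can consume.
python fuse.py --model <path_to_base_model> \
               --adapter-file adapters.npz \
               --save-path lora_fused_model
```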
@l0d0v1c I'm struggling with this (I'm a linguist with no computer/data science training). I've cloned the llama.cpp repo. If the fused/renamed model was in /Users/williammarcellino/mlx-examples/lora/lora_fused_model_GrKRoman_1640 how would I format a...
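(A sketch of the conversion being asked about, using the convert script llama.cpp shipped at the time; the script name and flags vary across llama.cpp versions, so treat this as an approximation rather than the exact answer that was given:)

```sh
# From the llama.cpp checkout: convert the fused model directly to a
# q8_0-quantized GGUF file.
python convert.py /Users/williammarcellino/mlx-examples/lora/lora_fused_model_GrKRoman_1640 \
       --outtype q8_0 \
       --outfile lora_fused_model_GrKRoman_1640-q8_0.gguf
```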
@l0d0v1c Awesome that worked! I have a working gguf_q8 version up and running in LM Studio 😊 Thank you so much. Also: my ft happens to be on the classical...
**Edit** (I shared raw text without instruction formatting by mistake.) I'm using a Mistral base, which marks instructions w/ [INST] & [/INST]. So I did: `{"text": "[INST] Q: \"What significant action did...`
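(For anyone copying this format: each line of train.jsonl is a single JSON object with one "text" field, with the answer following the closing [/INST] tag. The Q/A content below is made up purely for illustration:)

```json
{"text": "[INST] Q: \"Who commanded the Athenian fleet at Salamis?\" [/INST] A: \"Themistocles commanded the Athenian contingent at the Battle of Salamis in 480 BC.\""}
```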
Hi maeitar, First, thanks for writing this. I appreciate your effort and made a PayPal contribution. I think the current version isn't working. I've tried it on both Chrome (Windows...
I tried a fine-tune just now and got no results--the model+adapter gives the exact same responses as the base model at inference. I tried this format but no idea if...
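(One way to sanity-check that the adapter is actually being applied at inference is to generate with and without it and compare outputs; this sketch follows the lora.py flags from the mlx-examples README of that period, which may have changed since:)

```sh
# Generate with the adapter loaded; run the same prompt without
# --adapter-file and compare, to confirm the adapter changes behavior.
python lora.py --model <path_to_base_model> \
               --adapter-file adapters.npz \
               --max-tokens 100 \
               --prompt "[INST] Q: \"Who commanded the Athenian fleet at Salamis?\" [/INST]"
```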
> Was the model actually training? Can you add the log here? Thanks for responding and also for being patient with me (English PhD here fumbling around and kind of...
> Thanks, looks like it is training very very slowly (the loss didn't change much over the 300+ iterations), not sure why. The train loss should go down faster. Were...
> What model are you training? It looks like you ran out of memory...
>
> By the log, I just meant the output of the `python lora.py ...`.

...
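(The "log" here is just what lora.py prints to the console while training: iteration counts, train/val loss, and throughput. A typical training invocation, per the mlx-examples README of that period (defaults and flags depend on version), looks like the sketch below, and it's the printed train loss that should be steadily decreasing:)

```sh
# Train with LoRA; everything printed to the console during this run
# (iteration number, train loss, val loss, tokens/sec) is the "log".
python lora.py --model <path_to_base_model> \
               --train \
               --data ./data \
               --batch-size 4 \
               --lora-layers 16 \
               --iters 600
```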