Prince Canuma
Thanks @madroidmaq I'm working on it here: #41
It should be available soon.
You can read more about the supported models here: https://github.com/Blaizzy/mlx-vlm/blob/main/mlx_vlm/LORA.MD
Hi @jjboi8708 Yes, this will be fixed. From what I can tell, you ran out of memory. Could you share your machine specs and the text you were processing so...
I agree with you! Could you send a PR and add some metrics, such as accuracy and performance?
done ✅
@jrp2014 good question! In general you can find the correct models in the [mlx-community](https://huggingface.co/mlx-community) repo. They are usually converted and uploaded there before the release. We currently support the Pixtral...
Install from source; I recently merged a PR fixing all the bugs:
```
pip install git+https://github.com/Blaizzy/mlx-vlm.git
```
Uninstall and reinstall from source. It seems you have an older version. Check the version you have installed.
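For reference, the full check-uninstall-reinstall cycle might look like this (a sketch assuming pip as the installer and `mlx-vlm` as the installed package name):

```shell
# Show the currently installed version (if any)
pip show mlx-vlm

# Remove the installed copy
pip uninstall -y mlx-vlm

# Reinstall the latest code directly from the main branch
pip install git+https://github.com/Blaizzy/mlx-vlm.git
```

Running `pip show mlx-vlm` again afterwards should report the freshly installed version.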