Add support for Gemma 3?
As far as I can tell, all Gemma 3 models are multimodal, except maybe the 1B ones? I'm not sure, but their Hugging Face page says all of them are.
Okay, I see now that you already support it! Nice!
The 1B model can't be converted with this mlx_vlm library, but right now it can't be converted with the mlx_lm library either.
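For reference, this is roughly the conversion call that fails (a minimal sketch assuming mlx_lm's top-level `convert()` function; `google/gemma-3-1b-it` is my guess at the 1B repo id):

```python
# Conversion attempt sketch: assumes mlx_lm exposes convert() at the top level
# and that google/gemma-3-1b-it is the 1B checkpoint's Hugging Face repo id.
from mlx_lm import convert

convert(
    hf_path="google/gemma-3-1b-it",  # assumed repo id for the 1B model
    mlx_path="gemma-3-1b-it-mlx",    # local output directory
    quantize=True,                   # quantize the weights (4-bit by default)
)
```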
I already made a PR to MLX-LM to support 1B.
https://github.com/ml-explore/mlx-examples/pull/1336
You're very quick! Great work!
Would that PR also work for text-only use of larger models, like 27B?
Yes, it will :)
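Text-only use should then look like any other model (a minimal sketch assuming mlx_lm's `load()`/`generate()` API; the mlx-community repo id below is a guess, not a confirmed upload):

```python
# Text-only generation sketch: assumes mlx_lm's load()/generate() API and
# that an mlx-community 27B conversion exists (the repo id is a guess).
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/gemma-3-27b-it-4bit")
print(generate(model, tokenizer, prompt="Hello!", max_tokens=64))
```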
Thanks!
I just tried the converted model in `--chat` mode, but in response to a text-only query I only get `<pad>` as output.
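The same thing happens from Python (a repro sketch using mlx_lm's `load()`/`generate()` API instead of the `--chat` CLI; `gemma-3-1b-it-mlx` is just my local converted path):

```python
# Repro sketch via the Python API rather than --chat mode; assumes mlx_lm's
# load()/generate() and a local converted model at gemma-3-1b-it-mlx.
from mlx_lm import load, generate

model, tokenizer = load("gemma-3-1b-it-mlx")
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Hello, who are you?"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(generate(model, tokenizer, prompt=prompt, max_tokens=64))
# Prints only "<pad>" instead of a real response.
```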
@Blaizzy The 12B model does not seem to work for me; it just outputs a lot of nonsense... But when you do add the compatibility, would it be possible to add fine-tuning as well?
@psm-2 Same; this is what I get when running the 4B model from mlx-community. I haven't tried other versions of it.
Oh, and it seems [@vlbosch](https://github.com/vlbosch) also gets this.
Same on the 27B 8-bit model.
More people are having the same issue as above: https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/513
This should be fixed!