MedGemma support
Hello, is there going to be MedGemma support for the 4B model? I see someone has created the models in mlx-community https://huggingface.co/collections/mlx-community/medgemma-682d2f00f0ed156975e514f0 but they seem to be text-only. The 4B model is supposed to be multimodal, yet the conversion appears to have been done via mlx-lm. Thank you!
I see the processor is the one from Gemma 3. I suppose (if I am not missing any details) that means if the conversion is done with mlx-vlm instead of mlx-lm the model would be supported?
Hey @JoeJoe1313
You are correct! ✅
The multimodal model should be converted using mlx-vlm, not mlx-lm.
Please feel free to re-upload using mlx-vlm; if you can't, just ping me and I will do it later today.
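For anyone following along, the conversion would look roughly like this. This is a sketch: it assumes mlx-vlm's `convert` CLI mirrors the mlx-lm one (`--hf-path`, `--mlx-path`, `-q`), and the Hugging Face model id shown is illustrative, not taken from this thread.

```shell
# Convert a multimodal checkpoint with mlx-vlm (not mlx-lm),
# so the vision side of the processor is preserved.
# Paths and flags are assumptions based on the usual mlx-vlm CLI.
python -m mlx_vlm.convert \
  --hf-path google/medgemma-4b-it \
  --mlx-path medgemma-4b-it-mlx \
  -q   # optional: quantize the weights
```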
Hey guys, this has been fixed. I reran the conversions with mlx-vlm!
Hi guys, if I convert the model myself I have no problem running it, but when I download and run the updated models I am getting this error:
```
ValueError: Cannot use apply_chat_template because this processor does not have a chat template.
```
Is anyone else experiencing this behaviour?
This is happening because new models are migrating to Jinja files instead of JSON for the chat template.
I fixed it here #376.
You can install from source for now; it should be available in the next release.
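A source install until the next release would look something like this, assuming the standard pip-from-git workflow against the mlx-vlm repository:

```shell
# Install mlx-vlm from the main branch to pick up the
# Jinja chat-template fix before it lands in a tagged release.
pip install -U git+https://github.com/Blaizzy/mlx-vlm.git
```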
Hey guys, can this be closed?