Alexander Gusev
Okay, I see now that you already support it! Nice! The 1B one doesn't seem to be convertible through this mlx_vlm library, but it is not able to...
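(For context, here's roughly how I was invoking the conversion. This is a hedged sketch: the `mlx_vlm.convert` module path and the `--hf-path`/`--mlx-path`/`-q` flags are assumptions based on common mlx_vlm usage and may differ between versions, and the model id below is a placeholder, not the actual 1B model from this thread.)

```python
# Hedged sketch: shelling out to mlx_vlm's conversion entry point.
# Flags are assumptions from typical mlx_vlm usage; check the README
# for your installed version before relying on them.
import subprocess

subprocess.run(
    [
        "python", "-m", "mlx_vlm.convert",
        "--hf-path", "some-org/some-1b-model",  # hypothetical placeholder id
        "--mlx-path", "some-1b-model-mlx",      # local output directory
        "-q",                                    # quantize during conversion
    ],
    check=True,  # raise if the conversion process exits with an error
)
```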
@psm-2 Same, this is what I get when running the 4B model; I got it from mlx-community and haven't tried other versions of it. Oh, and it seems @vlbosch also gets this.
More people are having the same issue as above: https://github.com/lmstudio-ai/lmstudio-bug-tracker/issues/513
I am a mere consumer of this technology - not capable enough to create it myself.
Thanks, brother, you're doing the lord's work. I hope Apple puts more resources into MLX; I think it's very useful for a lot of people.
Wow, thanks, I did not know about that conversion space. Just tried it and it worked :D That means I can convert bigger models than my 16 GB of RAM allows, too... This...
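(Rough arithmetic on why this matters: weights alone at fp16 take about 2 bytes per parameter, so anything past ~7B already crowds 16 GB of RAM once you add overhead. A minimal sketch of the estimate, counting weights only and ignoring activations and runtime overhead:)

```python
# Back-of-the-envelope memory estimate for model weights only.
# fp16 = 16 bits/param, 4-bit quantized = 4 bits/param.
def weight_gb(params_billions: float, bits_per_param: float) -> float:
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / (1024 ** 3)

for b in (4, 7, 12):
    print(f"{b}B model: fp16 ≈ {weight_gb(b, 16):.1f} GB, "
          f"4-bit ≈ {weight_gb(b, 4):.1f} GB")
# 12B at fp16 is ~22 GB, which won't fit in 16 GB locally,
# hence converting on a hosted space instead.
```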
Oh, but it seems it's not possible to upload a non-quantized version? Because when I upload the complete non-quantized MLX version, that usually gets as many downloads as...
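(If anyone else hits this, uploading a locally converted, non-quantized folder by hand is one workaround. A minimal sketch using huggingface_hub's `create_repo` and `upload_folder`; the repo id and local path are placeholders, and it assumes you're already logged in:)

```python
# Hedged sketch: manually pushing a converted MLX model folder to the Hub.
# The repo id and folder path are placeholders, not real repos from this thread.
from huggingface_hub import HfApi

api = HfApi()  # assumes prior `huggingface-cli login`
api.create_repo("your-username/model-mlx-fp16", exist_ok=True)
api.upload_folder(
    folder_path="./model-mlx-fp16",          # local converted model directory
    repo_id="your-username/model-mlx-fp16",  # destination repo on the Hub
    repo_type="model",
)
```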