Prince Canuma
Got it! Please install mlx-vlm from source, try again, and let me know if the issue persists. I will make a new release later today with the fix in main.
My pleasure! It's done ✅
Thank you very much, that means a lot! I'm happy I could help ❤️
Working on it :) This is related to #241
Hey @chigkim The OpenAI API support for mlx-vlm is done in #321. However, I understand the need for a more complete and cohesive solution. That's why I had built FastMLX...
> Oh cool! Since https://github.com/Blaizzy/mlx-vlm/pull/321 with OpenAI endpoint is merged, should we close this? Thanks! Not yet, I'm still cooking :)
> Just wanted to share that I did a proof-of-concept of porting Kokoro to iOS devices. I took the MLX Python code and ported it to MLX Swift, and the...
I was working towards MLX-Swift, though Swift isn't my strong suit. I believe there is a lot of potential.
In addition to what Lucas said, you can also do `mx.metal.clear_cache()` at the end of each generation
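A minimal sketch of what that looks like in a generation loop. The `generate` callable and the prompts here are hypothetical stand-ins, not mlx-vlm's actual API; the import is guarded so the sketch also runs where mlx isn't installed:

```python
# Sketch: call mx.metal.clear_cache() after each generation to release
# cached Metal buffers and keep memory from growing across runs.
# Assumes mlx is installed; guarded so the sketch degrades gracefully without it.
try:
    import mlx.core as mx
    HAVE_MLX = True
except ImportError:
    HAVE_MLX = False

def run_generations(generate, prompts):
    """generate is a hypothetical per-prompt callable; prompts is a list of strings."""
    results = []
    for prompt in prompts:
        results.append(generate(prompt))
        if HAVE_MLX:
            mx.metal.clear_cache()  # free cached GPU buffers between generations
    return results

# Usage with a stand-in generator:
print(run_generations(lambda p: p.upper(), ["hello", "world"]))
```

Note that on recent MLX versions the same call is also exposed as `mx.clear_cache()`.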
Yes, he is accumulating results without releasing resources. Regarding evaluation, in v0.0.2 we added eval on Kokoro.