mlx-swift-examples
Make mlx-vlm examples in Swift
Consider porting some models from https://github.com/Blaizzy/mlx-vlm to Swift,
e.g.
- LLaVA: llava-hf/LLaVA-NeXT-Video-7B-hf
- Qwen2 VL: Qwen/Qwen2-VL-2B-Instruct
- Llama 3.2 Vision: meta-llama/Llama-3.2-11B-Vision-Instruct
- Phi-3 Vision: microsoft/Phi-3-vision-128k-instruct
- PaliGemma: google/paligemma-3b-mix-224
Currently, I am working on porting Llama 3.2 VLM to Swift. It would be great if we could make the VLM a separate package so that people can easily pull it down as a dependency and integrate it into their applications, for example, to add VLM support to ChatMLX.
If someone can put together the basic pipeline for one vision model, I can probably port the others to Swift fairly quickly.
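For example, pulling it in as a dependency might look something like this (just a sketch: the `MLXVLM` product name is a guess on my part until the package structure actually exists):

```swift
// swift-tools-version:5.9
import PackageDescription

// Sketch only: assumes the VLM code ships as a library product named
// "MLXVLM" from mlx-swift-examples -- the actual product name may differ.
let package = Package(
    name: "MyVLMApp",
    platforms: [.macOS(.v14), .iOS(.v16)],
    dependencies: [
        .package(url: "https://github.com/ml-explore/mlx-swift-examples", branch: "main")
    ],
    targets: [
        .executableTarget(
            name: "MyVLMApp",
            dependencies: [.product(name: "MLXVLM", package: "mlx-swift-examples")]
        )
    ]
)
```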
I am working on it right now and have paligemma done (well, not debugged but callable). I am working on how to structure the code with regard to the LLM library -- they should share code where possible.
I will try and put up the branch with what I have today. Next week will be busy so it might be two weeks from now before it is really ready.
Fantastic, thank you! Once that's in place, I'll start working on some of the other models (and will post here first to avoid duplication of work).
OK, you can see what I have -- more work to be done but the eval loop is worked out.
#151
This continues -- I have most of the refactoring done and llm-tool has a hard-coded call to paligemma. I need to implement a second VLM (qwen2_vl) so I can make sure I have the right shape for the APIs.
As mentioned before, this will be a breaking change in the API (so I will do a major version bump), but it should be pretty easy to adopt. Hopefully just a new import and renaming a couple of things: I will produce a guide when it is ready.
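To give a rough sense of the shape of the change (module names here are illustrative until the guide is ready):

```swift
// Illustrative only -- the real names will be in the migration guide.
// Before: everything lived in one module
// import LLM

// After: shared evaluation code splits out so the LLM and VLM
// libraries can reuse it (assumed names)
import MLXLMCommon   // common model / generation code
import MLXVLM        // vision-language models
```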
Thanks @davidkoski, your work is much appreciated! Once the API is stable, I'll try to port some of the other VLMs.
@davidkoski @DePasqualeOrg did either of you get Qwen2 VL working in Swift?
It is implemented in the branch right now but still lacks the image processor -- that is what I am starting on next.
You are doing god's work, @davidkoski! If you need help, lmk! Also, do you know what would be necessary to go from image processing to video processing?
@davidkoski https://github.com/Blaizzy/mlx-vlm/pull/97 here is a PR from mlx-vlm that might help!
Yes, this first version won't have it but it should be straightforward to add. Qwen2VL treats an array of images and a video roughly the same but handles them slightly differently in the processor. The video ends up with a different t value (temporal? time?) when it constructs the thw array.
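A simplified sketch of the idea (the constants and grouping below are assumptions based on the Python processor, not the actual Swift code):

```swift
// Simplified sketch of the (t, h, w) grid: images and video frames go
// through the same spatial patching, but video frames are grouped along
// the temporal axis while a standalone image contributes t = 1.
struct THW { let t: Int; let h: Int; let w: Int }

let patchSize = 14         // spatial patch edge (assumed)
let temporalPatchSize = 2  // video frames grouped per temporal patch (assumed)

// one standalone image: a single temporal slot
func grid(imageWidth: Int, imageHeight: Int) -> THW {
    THW(t: 1, h: imageHeight / patchSize, w: imageWidth / patchSize)
}

// a video: t grows with the number of frames
func grid(frameCount: Int, frameWidth: Int, frameHeight: Int) -> THW {
    THW(t: frameCount / temporalPatchSize,
        h: frameHeight / patchSize,
        w: frameWidth / patchSize)
}
```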
Yes, you're right about the array-of-images handling! I tried out a rough version of Qwen2VL and the memory usage on any reasonably sized video is absurd!
Seems like this might not be the architecture to support practical on-device video processing...
Btw, @davidkoski is there a way to set up an LLM API on MLX as is done with llama.cpp or tools like LM Studio? I have done this with llama.cpp but want to have the performance boost of MLX to see what's possible :)
Thanks again for all your great work -- I know you have been really involved with MLX from the start!
I am not sure what kind of API you mean -- certainly there is an API for preparing a prompt and generating tokens, but I think you mean something different.
Probably the answer is yes, but it might be something you would have to build, e.g. if you wanted a web service.
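Something like this is the shape I mean -- entirely hypothetical names, since the generation API exists but the service layer is what you would build yourself:

```swift
// Hypothetical sketch: `TextGenerator` stands in for whatever the real
// MLX generation API exposes, and the HTTP layer (Vapor, Hummingbird,
// NWListener, ...) would be your own code, as with llama.cpp's server.
protocol TextGenerator {
    func generate(prompt: String, maxTokens: Int) async throws -> String
}

struct CompletionService {
    let generator: TextGenerator

    // mirrors the llama.cpp server shape: prompt in, generated text out
    func handle(prompt: String) async throws -> String {
        try await generator.generate(prompt: prompt, maxTokens: 512)
    }
}
```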
Thanks for your work on the vlm branch, @davidkoski. Using llm-tool, I can get paligemma to work with the following flag: --model mlx-community/paligemma-3b-mix-448-8bit
but I can't get qwen2_vl to work using --model mlx-community/Qwen2-VL-2B-Instruct-4bit
Any assistance?
That is the version of the model I was using. What error do you see, or what output? I was using the prompt "describe the image in English" (because it often output Chinese text and this seemed pretty reliable in getting it to output English).
Here are the flags I'm using:
vlm --model mlx-community/Qwen2-VL-2B-Instruct-4bit --prompt "describe image in english" --image /Users/pathtoimage/image.png
I get this output when trying to use that model:
MLXNN/Module.swift:515: Fatal error: 'try!' expression unexpectedly raised an error: MLXNN.UpdateError.unableToCollectModulesFromContainer(base: "PatchMerger", key: "mlp")
@davidkoski
Ah, that looks like: https://github.com/ml-explore/mlx-swift/pull/164
Make sure that your mlx-swift is using the 0.21.0 (or higher) tag. I wonder if you still have 0.18?
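For a Package.swift based setup, the pin would be the dependencies entry below (this is the real mlx-swift repo URL; #164 is the fix linked above -- adjust if you consume the package through Xcode's UI instead):

```swift
// In your Package.swift dependencies, require 0.21.0 or later so the
// PatchMerger fix from ml-explore/mlx-swift#164 is included:
.package(url: "https://github.com/ml-explore/mlx-swift", from: "0.21.0")
```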
That was the issue! Thank you - it's working great!!
Closing this -- we have two models (qwen2-vl and paligemma). More can be added over time.